Alessandro Arzilli (aarzilli): silence is foo

aarzilli/gdlv 691

GUI frontend for Delve

aarzilli/emacs-textobjects 9

vim-like text objects implementation for emacs

aarzilli/debugger-bibliography 4

Annotated debugger implementation bibliography

aarzilli/go-iconv 4

iconv binding for golang

aarzilli/delve 3

Delve is a debugger for the Go programming language.

aarzilli/badnext 2

finds bad pcln associations in go binaries by comparing cfg derived from source code with cfg derived from disassembly

aarzilli/chade 2

Small character encoding debugging utility

aarzilli/emacs-eclim 2

This project brings some of the great eclipse features to emacs developers. It is based on the eclim project, which provides eclipse features for vim.

issue comment go-delve/delve

RPC error when waiting for remote debugger

You have something, probably an HTTP client, that connects to port 40000 and sends some sort of request.

Djolivald

comment created time in 6 hours

issue closed go-delve/delve

dlv not found after following installation instructions for MacOS

  1. What version of Delve are you using (dlv version)? latest

  2. What version of Go are you using (go version)? 1.13.6

  3. What operating system and processor architecture are you using? MacOS

  4. What did you do? I followed the instructions for macOS installation at https://github.com/go-delve/delve/blob/master/Documentation/installation/osx/install.md

  5. What did you expect to see? dlv usable as per the examples in the next step of the guide, https://github.com/go-delve/delve/blob/master/Documentation/cli/getting_started.md

  6. What did you see instead? dlv is not found

closed time in 6 hours

MichaelHindley

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

 func (t *Thread) reloadGAtPC() error {
 	cx := t.regs.CX()
 	pc := t.regs.PC()
-	// We are partially replicating the code of GdbserverThread.stepInstruction

P.S. This would be a behaviour change, so a refactoring wouldn't be the place to do it anyway.

derekparker

comment created time in 10 hours

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

 func (t *Thread) reloadGAtPC() error {
 	cx := t.regs.CX()
 	pc := t.regs.PC()
-	// We are partially replicating the code of GdbserverThread.stepInstruction

Looking into this more, is there really no way to get the value of fs_base out of LLDB/GDB?

GDB: yes, it is actually returned by the 'g' command (but this code doesn't work with gdb for other reasons). LLDB: no, as far as I know.

derekparker

comment created time in 10 hours

PR opened go-delve/delve

proc: optimize parseG

runtime.g is a large and growing struct, we only need a few fields.
Instead of using loadValue to load the full contents of g, cache its
memory and then only load the fields we care about.

Benchmark before:

BenchmarkConditionalBreakpoints-4              1        14586710018 ns/op

Benchmark after:

BenchmarkConditionalBreakpoints-4              1        12476166303 ns/op

Conditional breakpoint evaluation: 1.45ms -> 1.24ms

Updates #1549

+64 -40

0 comment

3 changed files

pr created time in a day

push event aarzilli/delve

Alessandro Arzilli

commit sha 8fa6d82177c155e3acab788afa10aba4e103026e

logflags: reduce default loglevel to Error (#1864)

Logged errors should be visible even if the corresponding flag was not selected.


polinasok

commit sha fbc4623c08700cf65cd70ff8c7537dadf3e0ed8e

service/dap: Initial implementation for 'dlv dap' (#1858)

* Initial implementation for 'dlv dap'
* Fix Travis and AppVeyor failures
* Address review comments
* Address review comments
* Regenerate documentation
* Replace dap server printfs with log.Error
* Update 'dap log'
* Fix typos
* Revert logflags changes that got mixed in by accident


Alessandro Arzilli

commit sha c272212baa3b3ff81408b1c6f323c8ad8b14e93a

proc/native: optimize native.status through buffering (#1865)

Benchmark before:
BenchmarkConditionalBreakpoints-4              1        15649407130 ns/op
Benchmark after:
BenchmarkConditionalBreakpoints-4              1        14586710018 ns/op

Conditional breakpoint evaluation 1.56ms -> 1.45ms

Updates #1549


aarzilli

commit sha 2d33157403c4172b2aad13e9b44bc2aabd07134a

proc: optimize parseG

runtime.g is a large and growing struct, we only need a few fields. Instead of using loadValue to load the full contents of g, cache its memory and then only load the fields we care about.

Benchmark before:
BenchmarkConditionalBreakpoints-4              1        14586710018 ns/op
Benchmark after:
BenchmarkConditionalBreakpoints-4              1        12476166303 ns/op

Conditional breakpoint evaluation: 1.45ms -> 1.24ms

Updates #1549


push time in a day

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

 package proc
 
+import (
+	"errors"
+	"go/ast"
+)
+
+// ErrDirChange is returned when trying to change execution direction
+// while there are still internal breakpoints set.
+var ErrDirChange = errors.New("direction change with internal breakpoints")
+
 // Target represents the process being debugged.
 type Target struct {
 	Process
 
+	// Breakpoint table, holds information on breakpoints.
+	// Maps instruction address to Breakpoint struct.
+	breakpoints BreakpointMap
+
 	// fncallForG stores a mapping of current active function calls.
 	fncallForG map[int]*callInjection
 
 	// gcache is a cache for Goroutines that we
 	// have read and parsed from the targets memory.
 	// This must be cleared whenever the target is resumed.
 	gcache goroutineCache
+
+	// threadToBreakpoint maps threads to the breakpoint that they
+	// have were trapped on.
+	threadToBreakpoint map[int]*BreakpointState

calling proc.Register

It does occur to me now that proc.Register is already taken, but you get the point.

Also, I'm not in principle opposed to proc.Launch/proc.Attach, it's just that I don't see it as a big win, and having the backends be unable to see inside pkg/proc is too great of a downside.

derekparker

comment created time in a day

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target


I guess my position on it is, we don't expect or want anything outside of proc to import the backends directly, so why not enforce that using an internal import path? Right now the only reason why the backends are being imported is, as you mentioned, the Launch, Attach, etc functions. To me though that feels backwards as well

I don't agree with this. Yes, we don't expect anything outside of debugger to import the backends; we also do not expect anything outside of debugger to import proc itself. And we don't expect anything outside of service to import debugger.

And I don't think having the constructors in the backend is backwards: you want to use the native backend, you import the native backend. If having the switch statement duplicated in debugger and proc_test is a problem, we can use a mechanism like database/sql, with the init function of the backends calling proc.Register.

derekparker

comment created time in a day

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

 func (t *Thread) reloadGAtPC() error {
 	cx := t.regs.CX()
 	pc := t.regs.PC()
-	// We are partially replicating the code of GdbserverThread.stepInstruction

This code shouldn't be removed: removing it doesn't break debugserver on macOS, because we can allocate memory on the target process, but it will break lldb-server on Linux, and the spec is not clear on what would happen in this circumstance anyway.

derekparker

comment created time in a day

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target


They also all export the Process and Thread concrete types but there's no way to create them and we could very easily un-export them.

derekparker

comment created time in 3 days

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target


The backends aren't safe to use by themselves, so I'd prefer if nothing outside of proc is able to import them

If you look at what the backends actually export right now, almost everything you can call can be safely called by anyone, because the only things you can call are the launch and attach functions, which return Target objects. The exceptions here are the Ptrace* functions in the native package, which are exported for no reason at all.

derekparker

comment created time in 3 days

PR opened go-delve/delve

proc/native: optimize native.status through buffering

Benchmark before:

BenchmarkConditionalBreakpoints-4              1        15649407130 ns/op

Benchmark after:

BenchmarkConditionalBreakpoints-4              1        14586710018 ns/op

Conditional breakpoint evaluation 1.56ms -> 1.45ms

Updates #1549

+3 -1

0 comment

1 changed file

pr created time in 4 days

push event aarzilli/delve

aarzilli

commit sha 8b9d884cd298831efa6490d073d93a629b15ff2b

proc/native: optimize native.status through buffering

Benchmark before:
BenchmarkConditionalBreakpoints-4              1        15649407130 ns/op
Benchmark after:
BenchmarkConditionalBreakpoints-4              1        14586710018 ns/op

Conditional breakpoint evaluation 1.56ms -> 1.45ms

Updates #1549


push time in 4 days

push event aarzilli/delve

Alessandro Arzilli

commit sha 5b4f4a81b1f004eb86377c09931815e8f66afa27

proc: do not load g0 until it's needed when stacktracing (#1863)

The stacktrace code occasionally needs the value of g.m.g0.sched.sp to switch stacks. Since this is only needed rarely and calling parseG is relatively expensive we should delay doing it until we know it will be needed.

Benchmark before:
BenchmarkConditionalBreakpoints-4              1        17326345671 ns/op
Benchmark after:
BenchmarkConditionalBreakpoints-4              1        15649407130 ns/op

Reduces conditional breakpoint latency from 1.7ms to 1.56ms.

Updates #1549


chainhelen

commit sha a5d9dbee7958e9f6b5fdf3f80401769799dceef1

pkg,service: add cmd `examinemem` (`x`) for examining memory (#1814)

According to #1800 #1584 #1038, `dlv` should enable the user to dive into memory. Users can print binary data in a specific memory address range; specific variable names or structures are not supported yet (because I have no idea how to modify the `print` command). Close #1584.


Derek Parker

commit sha a277b15defba5fae12cec0a70bfa85171a02f872

proc/gdbserial: Reload thread registers on demand

Instead of reloading the registers for every thread every time the process executes, reload the registers on demand for individual threads and memoize the result.


aarzilli

commit sha 02e7060bad877eb2a5e970c741a2412c76a09143

proc/native: optimize native.status through buffering

Benchmark before:
BenchmarkConditionalBreakpoints-4              1        17294564246 ns/op
Benchmark after:
BenchmarkConditionalBreakpoints-4              1        15929810602 ns/op

Conditional breakpoint evaluation 1.7ms -> 1.6ms

Updates #1549


push time in 4 days

push event aarzilli/delve

aarzilli

commit sha 9d7cfab1e9538c5c0fe978a50967d4c5ed19aece

logflags: reduce default loglevel to Error

Logged errors should be visible even if the corresponding flag was not selected.


push time in 4 days

Pull request review comment go-delve/delve

service/dap: Initial implementation for 'dlv dap'

+package dap
+
+import (
+	"bufio"
+	"encoding/json"
+	"fmt"
+	"io"
+	"net"
+	"path/filepath"
+
+	"github.com/go-delve/delve/pkg/logflags"
+	"github.com/go-delve/delve/pkg/proc"
+	"github.com/go-delve/delve/service"
+	"github.com/go-delve/delve/service/api"
+	"github.com/go-delve/delve/service/debugger"
+	"github.com/google/go-dap"
+	"github.com/sirupsen/logrus"
+)
+
+// Package dap implements VSCode's Debug Adaptor Protocol (DAP).
+// This allows delve to communicate with frontends using DAP
+// without a separate adaptor. The frontend will run the debugger
+// (which now doubles as an adaptor) in server mode listening on
+// a port and communicating over TCP. This is work in progress,
+// so for now Delve in dap mode only supports synchronous
+// request-response communication, blocking while processing each request.
+// For DAP details see https://microsoft.github.io/debug-adapter-protocol.
+
+// Server implements a DAP server that can accept a single client for
+// a single debug session. It does not support restarting.
+// The server operates via two goroutines:
+// (1) Main goroutine where the server is created via NewServer(),
+// started via Run() and stopped via Stop().
+// (2) Run goroutine started from Run() that accepts a client connection,
+// reads, decodes and processes each request, issuing commands to the
+// underlying debugger and sending back events and responses.
+// TODO(polina): make it asynchronous (i.e. launch goroutine per request)
+type Server struct {
+	// config is all the information necessary to start the debugger and server.
+	config *service.Config
+	// listener is used to accept the client connection.
+	listener net.Listener
+	// conn is the accepted client connection.
+	conn net.Conn
+	// reader is used to read requests from the connection.
+	reader *bufio.Reader
+	// debugger is the underlying debugger service.
+	debugger *debugger.Debugger
+	// log is used for structured logging.
+	log *logrus.Entry
+	// stopOnEntry is set to automatically stop the debugee after start.
+	stopOnEntry bool
+}
+
+// NewServer creates a new DAP Server. It takes an opened Listener
+// via config and assumes its ownerhsip. Optinally takes DisconnectChan
+// via config, which can be used to detect when the client disconnects
+// and the server is ready to be shut down. The caller must call
+// Stop() on shutdown.
+func NewServer(config *service.Config) *Server {
+	logger := logflags.DAPLogger()
+	logflags.WriteDAPListeningMessage(config.Listener.Addr().String())
+	return &Server{
+		config:   config,
+		listener: config.Listener,
+		log:      logger,
+	}
+}
+
+// Stop stops the DAP debugger service, closes the listener and
+// the client connection. It shuts down the underlying debugger
+// and kills the target process if it was launched by it.
+func (s *Server) Stop() {
+	s.listener.Close()
+	if s.conn != nil {
+		// Unless Stop() was called after serveDAPCodec()
+		// returned, this will result in closed connection error
+		// on next read, breaking out of the read loop and
+		// allowing the run goroutine to exit.
+		s.conn.Close()
+	}
+	if s.debugger != nil {
+		kill := s.config.AttachPid == 0
+		if err := s.debugger.Detach(kill); err != nil {
+			fmt.Println(err)
+		}
+	}
+}
+
+// signalDisconnect closes config.DisconnectChan if not nil, which
+// signals that the client disconnected or there was a client
+// connection failure. Since the server currently services only one
+// client, this can be used as a signal to the entire server via
+// Stop(). The function safeguards agaist closing the channel more
+// than once and can be called multiple times. It is not thread-safe
+// and is currently only called from the run goroutine.
+// TODO(polina): lock this when we add more goroutines that could call
+// this when we support asynchronous request-response communication.
+func (s *Server) signalDisconnect() {
+	// DisconnectChan might be nil at server creation if the
+	// caller does not want to rely on the disconnect signal.
+	if s.config.DisconnectChan != nil {
+		close(s.config.DisconnectChan)
+		// Take advantage of the nil check above to avoid accidentally
+		// closing the channel twice and causing a panic, when this
+		// function is called more than once. For example, we could
+		// have the following sequence of events:
+		// -- run goroutine: calls onDisconnectRequest()
+		// -- run goroutine: calls signalDisconnect()
+		// -- main goroutine: calls Stop()
+		// -- main goroutine: Stop() closes client connection
+		// -- run goroutine: serveDAPCodec() gets "closed network connection"
+		// -- run goroutine: serveDAPCodec() returns
+		// -- run goroutine: serveDAPCodec calls signalDisconnect()
+		s.config.DisconnectChan = nil
+	}
+}
+
+// Run launches a new goroutine where it accepts a client connection
+// and starts processing requests from it. Use Stop() to close connection.
+// The server does not support multiple clients, serially or in parallel.
+// The server should be restarted for every new debug session.
+// The debugger won't be started until launch/attach request is received.
+// TODO(polina): allow new client connections for new debug sessions,
+// so the editor needs to launch delve only once?
+func (s *Server) Run() {
+	go func() {
+		conn, err := s.listener.Accept()
+		if err != nil {
+			// This will print if the server is killed with Ctrl+C
+			// before client connection is accepted.
+			fmt.Printf("Error accepting client connection: %s\n", err)
+			s.signalDisconnect()
+			return
+		}
+		s.conn = conn
+		s.serveDAPCodec()
+	}()
+}
+
+// serveDAPCodec reads and decodes requests from the client
+// until it encounters an error or EOF, when it sends
+// the disconnect signal and returns.
+func (s *Server) serveDAPCodec() {
+	defer s.signalDisconnect()
+	s.reader = bufio.NewReader(s.conn)
+	for {
+		request, err := dap.ReadProtocolMessage(s.reader)
+		// TODO(polina): Differentiate between errors and handle them
+		// gracefully. For example,
+		// -- "use of closed network connection" means client connection
+		// was closed via Stop() in response to a disconnect request.
+		// -- "Request command 'foo' is not supported" means we
+		// potentially got some new DAP request that we do not yet have
+		// decoding support for, so we can respond with an ErrorResponse.
+		// TODO(polina): to support this add Seq to
+		// dap.DecodeProtocolMessageFieldError.
+		if err != nil {
+			if err != io.EOF {
+				fmt.Println("DAP error:", err)
+			}
+			return
+		}
+		s.handleRequest(request)
+	}
+}
+
+func (s *Server) handleRequest(request dap.Message) {
+	jsonmsg, _ := json.Marshal(request)
+	s.log.Debug("[<- from client]", string(jsonmsg))
+
+	switch request := request.(type) {
+	case *dap.InitializeRequest:
+		s.onInitializeRequest(request)
+	case *dap.LaunchRequest:
+		s.onLaunchRequest(request)
+	case *dap.AttachRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.DisconnectRequest:
+		s.onDisconnectRequest(request)
+	case *dap.TerminateRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.RestartRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.SetBreakpointsRequest:
+		s.onSetBreakpointsRequest(request)
+	case *dap.SetFunctionBreakpointsRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.SetExceptionBreakpointsRequest:
+		s.onSetExceptionBreakpointsRequest(request)
+	case *dap.ConfigurationDoneRequest:
+		s.onConfigurationDoneRequest(request)
+	case *dap.ContinueRequest:
+		s.onContinueRequest(request)
+	case *dap.NextRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.StepInRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.StepOutRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.StepBackRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.ReverseContinueRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.RestartFrameRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.GotoRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.PauseRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.StackTraceRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.ScopesRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.VariablesRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.SetVariableRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.SetExpressionRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.SourceRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.ThreadsRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.TerminateThreadsRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.EvaluateRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.StepInTargetsRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.GotoTargetsRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.CompletionsRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.ExceptionInfoRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.LoadedSourcesRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.DataBreakpointInfoRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.SetDataBreakpointsRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.ReadMemoryRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.DisassembleRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.CancelRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	case *dap.BreakpointLocationsRequest:
+		s.sendUnsupportedErrorResponse(request.Request)
+	default:
+		// This is a DAP message that go-dap has a struct for, so
+		// decoding succeeded, but this function does not know how
+		// to handle. We should be sending an ErrorResponse, but
+		// we cannot get to Seq and other fields from dap.Message.
+		// TODO(polina): figure out how to handle this better.
+		// Consider adding GetSeq() method to dap.Message interface.
+		panic(fmt.Sprintf("Unable to process %#v", request))

I think you should use the logger with loglevel Error (see https://github.com/go-delve/delve/pull/1864)

polinasok

comment created time in 4 days

PR opened go-delve/delve

logflags: reduce default loglevel to Error

Logged errors should be visible even if the corresponding flag was not
selected.

+1 -1

0 comment

1 changed file

pr created time in 4 days

create branch aarzilli/delve

branch: loglevel

created branch time in 4 days

Pull request review comment go-delve/delve

service/dap: Initial implementation for 'dlv dap'


Ok.

polinasok

comment created time in 4 days

Pull request review comment go-delve/delve

service/dap: Initial implementation for 'dlv dap'

Use Stop() to close connection.+// The server does not support multiple clients, serially or in parallel.+// The server should be restarted for every new debug session.+// The debugger won't be started until launch/attach request is received.+// TODO(polina): allow new client connections for new debug sessions,+// so the editor needs to launch delve only once?+func (s *Server) Run() {+	go func() {+		conn, err := s.listener.Accept()+		if err != nil {+			// This will print if the server is killed with Ctrl+C+			// before client connection is accepted.+			fmt.Printf("Error accepting client connection: %s\n", err)+			s.signalDisconnect()+			return+		}+		s.conn = conn+		s.serveDAPCodec()+	}()+}++// serveDAPCodec reads and decodes requests from the client+// until it encounters an error or EOF, when it sends+// the disconnect signal and returns.+func (s *Server) serveDAPCodec() {+	defer s.signalDisconnect()+	s.reader = bufio.NewReader(s.conn)+	for {+		request, err := dap.ReadProtocolMessage(s.reader)+		// TODO(polina): Differentiate between errors and handle them+		// gracefully. 
For example,+		// -- "use of closed network connection" means client connection+		// was closed via Stop() in response to a disconnect request.+		// -- "Request command 'foo' is not supported" means we+		// potentially got some new DAP request that we do not yet have+		// decoding support for, so we can respond with an ErrorResponse.+		// TODO(polina): to support this add Seq to+		// dap.DecodeProtocolMessageFieldError.+		if err != nil {+			if err != io.EOF {+				fmt.Println("DAP error:", err)+			}+			return+		}+		s.handleRequest(request)+	}+}++func (s *Server) handleRequest(request dap.Message) {+	jsonmsg, _ := json.Marshal(request)+	s.log.Debug("[<- from client]", string(jsonmsg))++	switch request := request.(type) {+	case *dap.InitializeRequest:+		s.onInitializeRequest(request)+	case *dap.LaunchRequest:+		s.onLaunchRequest(request)+	case *dap.AttachRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.DisconnectRequest:+		s.onDisconnectRequest(request)+	case *dap.TerminateRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.RestartRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.SetBreakpointsRequest:+		s.onSetBreakpointsRequest(request)+	case *dap.SetFunctionBreakpointsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.SetExceptionBreakpointsRequest:+		s.onSetExceptionBreakpointsRequest(request)+	case *dap.ConfigurationDoneRequest:+		s.onConfigurationDoneRequest(request)+	case *dap.ContinueRequest:+		s.onContinueRequest(request)+	case *dap.NextRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.StepInRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.StepOutRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.StepBackRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.ReverseContinueRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.RestartFrameRequest:+		
s.sendUnsupportedErrorResponse(request.Request)+	case *dap.GotoRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.PauseRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.StackTraceRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.ScopesRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.VariablesRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.SetVariableRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.SetExpressionRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.SourceRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.ThreadsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.TerminateThreadsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.EvaluateRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.StepInTargetsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.GotoTargetsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.CompletionsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.ExceptionInfoRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.LoadedSourcesRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.DataBreakpointInfoRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.SetDataBreakpointsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.ReadMemoryRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.DisassembleRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.CancelRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.BreakpointLocationsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	default:+		// This is a DAP message that go-dap has a struct for, so+		// decoding succeeded, but this function does not know how+		// to handle. 
We should be sending an ErrorResponse, but+		// we cannot get to Seq and other fields from dap.Message.+		// TODO(polina): figure out how to handle this better.+		// Consider adding GetSeq() method to dap.Message interface.+		panic(fmt.Sprintf("Unable to process %#v", request))+	}+}++func (s *Server) send(message dap.Message) {+	jsonmsg, _ := json.Marshal(message)+	s.log.Debug("[-> to client]", string(jsonmsg))+	dap.WriteProtocolMessage(s.conn, message)+}++func (s *Server) onInitializeRequest(request *dap.InitializeRequest) {+	// TODO(polina): Respond with an error if debug session is in progress?+	response := &dap.InitializeResponse{Response: *newResponse(request.Request)}+	response.Body.SupportsConfigurationDoneRequest = true+	// TODO(polina): support this to match vscode-go functionality+	response.Body.SupportsSetVariable = false+	s.send(response)+}++func (s *Server) onLaunchRequest(request *dap.LaunchRequest) {+	// TODO(polina): Respond with an error if debug session is in progress?+	program, ok := request.Arguments["program"]+	if !ok || program == "" {+		s.sendErrorResponse(request.Request,+			3000, "Failed to launch",+			"The program attribute is missing in debug configuration.")+		return+	}+	s.config.ProcessArgs = []string{program.(string)}+	s.config.WorkingDir = filepath.Dir(program.(string))+	// TODO: support program args++	stop, ok := request.Arguments["stopOnEntry"]+	s.stopOnEntry = (ok && stop == true)++	mode, ok := request.Arguments["mode"]+	if !ok || mode == "" {+		mode = "debug"+	}+	// TODO(polina): support "debug", "test" and "remote" modes+	if mode != "exec" {+		s.sendErrorResponse(request.Request,+			3000, "Failed to launch",+			fmt.Sprintf("Unsupported 'mode' value '%s' in debug configuration.", mode))+		return+	}++	config := &debugger.Config{+		WorkingDir:           s.config.WorkingDir,+		AttachPid:            0,+		CoreFile:             "",+		Backend:              s.config.Backend,+		Foreground:           s.config.Foreground,+		
DebugInfoDirectories: s.config.DebugInfoDirectories,+		CheckGoVersion:       s.config.CheckGoVersion,+	}+	var err error+	if s.debugger, err = debugger.New(config, s.config.ProcessArgs); err != nil {+		s.sendErrorResponse(request.Request,+			3000, "Failed to launch", err.Error())+		return+	}++	// Notify the client that the debugger is ready to start accepting+	// configuration requests for setting breakpoints, etc. The client+	// will end the configuration sequence with 'configurationDone'.+	s.send(&dap.InitializedEvent{Event: *newEvent("initialized")})+	s.send(&dap.LaunchResponse{Response: *newResponse(request.Request)})+}++// onDisconnectRequest handles the DisconnectRequest. Per the DAP spec,+// it disconnects the debuggee and signals that the debug adaptor+// (in our case this TCP server) can be terminated.+func (s *Server) onDisconnectRequest(request *dap.DisconnectRequest) {+	s.send(&dap.DisconnectResponse{Response: *newResponse(request.Request)})+	// TODO(polina): only halt if the program is running+	if s.debugger != nil {+		_, err := s.debugger.Command(&api.DebuggerCommand{Name: api.Halt})+		if err != nil {+			s.log.Error(err)+		}+		kill := s.config.AttachPid == 0+		err = s.debugger.Detach(kill)+		if err != nil {+			s.log.Error(err)+		}+	}+	// TODO(polina): make thread-safe when handlers become asynchronous.+	s.signalDisconnect()+}++func (s *Server) onSetBreakpointsRequest(request *dap.SetBreakpointsRequest) {+	if request.Arguments.Source.Path == "" {+		s.log.Error("ERROR: Unable to set breakpoint for empty file path")+	}+	response := &dap.SetBreakpointsResponse{Response: *newResponse(request.Request)}+	response.Body.Breakpoints = make([]dap.Breakpoint, len(request.Arguments.Breakpoints))+	// Only verified breakpoints will be set and reported back in the+	// response. All breakpoints resulting in errors (e.g. 
duplicates+	// or lines that do not have statements) will be skipped.+	i := 0+	for _, b := range request.Arguments.Breakpoints {+		bp, err := s.debugger.CreateBreakpoint(+			&api.Breakpoint{File: request.Arguments.Source.Path, Line: b.Line})+		if err != nil {+			s.log.Error("ERROR:", err)+			continue+		}+		response.Body.Breakpoints[i].Verified = true+		response.Body.Breakpoints[i].Line = bp.Line+		i+++	}+	response.Body.Breakpoints = response.Body.Breakpoints[:i]+	s.send(response)+}++func (s *Server) onSetExceptionBreakpointsRequest(request *dap.SetExceptionBreakpointsRequest) {+	// Unlike what DAP documentation claims, this request is always sent+	// even though we specified no filters at initializatin. Handle as no-op.+	s.send(&dap.SetExceptionBreakpointsResponse{Response: *newResponse(request.Request)})+}++func (s *Server) onConfigurationDoneRequest(request *dap.ConfigurationDoneRequest) {+	if s.stopOnEntry {+		e := &dap.StoppedEvent{+			Event: *newEvent("stopped"),+			Body:  dap.StoppedEventBody{Reason: "breakpoint", ThreadId: 1, AllThreadsStopped: true},+		}+		s.send(e)+	}+	s.send(&dap.ConfigurationDoneResponse{Response: *newResponse(request.Request)})+	if !s.stopOnEntry {+		s.doContinue()+	}+}++func (s *Server) onContinueRequest(request *dap.ContinueRequest) {+	s.send(&dap.ContinueResponse{Response: *newResponse(request.Request)})+	s.doContinue()+}++func (s *Server) sendErrorResponse(request dap.Request, id int, summary string, details string) {+	er := &dap.ErrorResponse{}+	er.Type = "response"+	er.Command = request.Command+	er.RequestSeq = request.Seq+	er.Success = false+	er.Message = summary+	er.Body.Error.Id = id+	er.Body.Error.Format = fmt.Sprintf("%s: %s", summary, details)+	s.log.Error(er.Body.Error.Format)+	s.send(er)+}++func (s *Server) sendUnsupportedErrorResponse(request dap.Request) {+	s.sendErrorResponse(request, 9999, "Unsupported command",+		fmt.Sprintf("cannot process '%s' request", request.Command))+}++func newResponse(request dap.Request) 
*dap.Response {+	return &dap.Response{+		ProtocolMessage: dap.ProtocolMessage{+			Seq:  0,+			Type: "response",+		},+		Command:    request.Command,+		RequestSeq: request.Seq,+		Success:    true,+	}+}++func newEvent(event string) *dap.Event {+	return &dap.Event{+		ProtocolMessage: dap.ProtocolMessage{+			Seq:  0,+			Type: "event",+		},+		Event: event,+	}+}++func (s *Server) doContinue() {+	if s.debugger == nil {+		return+	}+	state, err := s.debugger.Command(&api.DebuggerCommand{Name: api.Continue})+	if err != nil {+		s.log.Error(err)+		switch err.(type) {+		case proc.ErrProcessExited:+			e := &dap.TerminatedEvent{Event: *newEvent("terminated")}+			s.send(e)+		default:+		}+		return+	}+	if state.Exited {+		e := &dap.TerminatedEvent{Event: *newEvent("terminated")}+		s.send(e)+	} else {+		e := &dap.StoppedEvent{Event: *newEvent("stopped")}+		e.Body.Reason = "breakpoint"

I don't think I can tell from the debugger if this was a pause or a breakpoint, can I? I guess I could keep track of whether this was called in the context of a breakpoint or a halt request.

Yes, you would have to keep track (and there would be a race condition if the user requested a pause just as the program hit a breakpoint).

polinasok

comment created time in 4 days
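The bookkeeping discussed above can be sketched in a few lines of Go: record whether the client requested a halt, then consult that flag when emitting the stopped event. All names here are hypothetical stand-ins, not the actual Delve or DAP server code, and as noted in the reply this does not eliminate the race between a pause request and a breakpoint hit.

```go
package main

import (
	"fmt"
	"sync"
)

// stopReasonTracker records why the target was last asked to stop, so a
// subsequent stopped event can report "pause" vs "breakpoint".
// Hypothetical illustration only.
type stopReasonTracker struct {
	mu     sync.Mutex
	halted bool // set when the client explicitly requested a pause
}

// RequestPause marks that the next stop was client-initiated.
func (t *stopReasonTracker) RequestPause() {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.halted = true
}

// StopReason returns the DAP reason string for the current stop and
// resets the flag. If a breakpoint is hit at the same moment a pause is
// requested, this can still report "pause" (the race mentioned above).
func (t *stopReasonTracker) StopReason() string {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.halted {
		t.halted = false
		return "pause"
	}
	return "breakpoint"
}

func main() {
	var t stopReasonTracker
	fmt.Println(t.StopReason()) // breakpoint
	t.RequestPause()
	fmt.Println(t.StopReason()) // pause
}
```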

Pull request review comment go-delve/delve

service/dap: Initial implementation for 'dlv dap'

Would there be any issues with the debugger object if I called Debugger.CreateBreakpoint in one goroutine and, while it was blocking, another call to the debugger was made from another goroutine?

Yes, it would block until the other call to the debugger was finished. Debugger serializes all its calls with a mutex, with the exception of the Halt command and State with nowait == true.

polinasok

comment created time in 4 days
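The serialization described in the reply above can be illustrated with a small, self-contained sketch; the type and method names are stand-ins, not Delve's real debugger API:

```go
package main

import (
	"fmt"
	"sync"
)

// debuggerStub mimics how a debugger service can serialize most calls
// with a single mutex, as described in the reply. Illustrative only.
type debuggerStub struct {
	mu sync.Mutex
}

// CreateBreakpoint takes the mutex, so it blocks until any other
// serialized call from another goroutine has finished.
func (d *debuggerStub) CreateBreakpoint(file string, line int) string {
	d.mu.Lock()
	defer d.mu.Unlock()
	return fmt.Sprintf("breakpoint set at %s:%d", file, line)
}

// Halt deliberately does not take the mutex, so it can interrupt a
// long-running serialized command such as Continue, mirroring the
// exception mentioned above.
func (d *debuggerStub) Halt() string {
	return "halt requested"
}

func main() {
	d := &debuggerStub{}
	fmt.Println(d.CreateBreakpoint("main.go", 10))
	fmt.Println(d.Halt())
}
```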

Pull request review comment go-delve/delve

service/dap: Initial implementation for 'dlv dap'

 func remove(path string) { 	} } +func dapCmd(cmd *cobra.Command, args []string) {+	status := func() int {+		if err := logflags.Setup(Log, LogOutput, LogDest); err != nil {+			fmt.Fprintf(os.Stderr, "%v\n", err)+			return 1+		}+		defer logflags.Close()++		if Headless {+			fmt.Fprintf(os.Stderr, "Warning: headless mode not supported with dap\n")+		}+		if AcceptMulti {+			fmt.Fprintf(os.Stderr, "Warning: accept multiclient mode not supported with dap\n")+		}+		if InitFile != "" {+			fmt.Fprint(os.Stderr, "Warning: init file ignored with dap\n")+		}+		if ContinueOnStart {+			fmt.Fprintf(os.Stderr, "Warning: continue ignored with dap; specify via launch/attach request instead\n")+		}+		if BuildFlags != "" {+			fmt.Fprintf(os.Stderr, "Warning: build flags ignored with dap; specify via launch/attach request instead\n")+		}+		if WorkingDir != "" {+			fmt.Fprintf(os.Stderr, "Warning: working directory ignored with dap; launch requests must specify full program path\n")+		}+		dlvArgs, targetArgs := splitArgs(cmd, args)+		if len(dlvArgs) > 0 {+			fmt.Fprintf(os.Stderr, "Warning: debug arguments ignored with dap; specify via launch/attach request instead\n")+		}+		if len(targetArgs) > 0 {+			fmt.Fprintf(os.Stderr, "Warning: program flags ignored with dap; specify via launch/attach request instead\n")+		}++		listener, err := net.Listen("tcp", Addr)+		if err != nil {+			fmt.Printf("couldn't start listener: %s\n", err)

No, we should probably change the other one. Don't worry about it.

polinasok

comment created time in 4 days

PR closed aarzilli/golua

add GetState func

#75

+6 -2

0 comment

1 changed file

edolphin-ydf

pr closed time in 4 days

pull request comment aarzilli/golua

add GetState func on master branch

Thank you.

edolphin-ydf

comment created time in 4 days

push event aarzilli/golua

edolphin

commit sha 30fc24b97a1611212cb8c09476c0d0558e7e13ab

add GetState func

view details

push time in 4 days

push event aarzilli/golua

edolphin

commit sha 849bad3ad6ea9c9e82e08da984194f17c85a7349

add GetState func

view details

push time in 4 days

PR merged aarzilli/golua

add GetState func on master branch

#75

+6 -2

0 comment

1 changed file

edolphin-ydf

pr closed time in 4 days

push event aarzilli/golua

edolphin

commit sha 421e0de0aa200830c11177a428340ca8ea5c766a

add GetState func

view details

push time in 4 days

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

 import ( type Process interface { 	Info 	ProcessManipulation-	BreakpointManipulation 	RecordingManipulation } +// ProcessInternal holds a set of methods that are+// not meant to be called by anyone except for an instance of+// `proc.Target`. These methods are not safe to use by themselves+// and should never be called directly outside of the `proc` package.+// This is temporary and in support of an ongoing refactor.+type ProcessInternal interface {+	WriteBreakpointFn(addr uint64) (string, int, *Function, []byte, error)+	ClearBreakpointFn(uint64, []byte) error+	AdjustsPCAfterBreakpoint() bool+	CurrentDirection() Direction

Restart, SetSelectedGoroutine, ContinueOnce and Detach should also be moved here.

derekparker

comment created time in 4 days

issue comment aarzilli/golua

How about adding a method to get *C.lua_State?

If you submit a PR for this I will approve it.

edolphin-ydf

comment created time in 4 days

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

 type Info interface { 	BinInfo() *BinaryInfo 	EntryPoint() (uint64, error) +	AdjustsPCAfterBreakpoint() bool

Let's call it AdjustPC and have it match the direction of adjustPC.

derekparker

comment created time in 4 days

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

 package proc +import (+	"errors"+	"go/ast"+)++// ErrDirChange is returned when trying to change execution direction+// while there are still internal breakpoints set.+var ErrDirChange = errors.New("direction change with internal breakpoints")+ // Target represents the process being debugged. type Target struct { 	Process +	// Breakpoint table, holds information on breakpoints.+	// Maps instruction address to Breakpoint struct.+	breakpoints BreakpointMap+ 	// fncallForG stores a mapping of current active function calls. 	fncallForG map[int]*callInjection  	// gcache is a cache for Goroutines that we 	// have read and parsed from the target's memory. 	// This must be cleared whenever the target is resumed. 	gcache goroutineCache++	// threadToBreakpoint maps threads to the breakpoint that they+	// were trapped on.+	threadToBreakpoint map[int]*BreakpointState

Either a third package will initialize them or the initialization will keep happening as it does now and their path won't have 'internal' in it. What does putting them into an internal package get us?

At the moment proc can call anything in a backend and the backend can call anything in proc. I'd like that to continue to be true.

derekparker

comment created time in 4 days

push event go-delve/delve

Derek Parker

commit sha a277b15defba5fae12cec0a70bfa85171a02f872

proc/gdbserial: Reload thread registers on demand Instead of reloading the registers for every thread every time the process executes, reload the registers on demand for individual threads and memoize the result.

view details

push time in 5 days

PR opened go-delve/delve

proc: do not load g0 until it's needed when stacktracing

The stacktrace code occasionally needs the value of g.m.g0.sched.sp to
switch stacks. Since this is only needed rarely and calling parseG is
relatively expensive we should delay doing it until we know it will be
needed.

Benchmark before:

BenchmarkConditionalBreakpoints-4              1        17326345671 ns/op

Benchmark after:

BenchmarkConditionalBreakpoints-4              1        15649407130 ns/op

Reduces conditional breakpoint latency from 1.7ms to 1.56ms.

Updates #1549

+22 -11

0 comment

3 changed files

pr created time in 5 days

create branch aarzilli/delve

branch : opt

created branch time in 5 days

Pull request review comment go-delve/delve

service/dap: Initial implementation for 'dlv dap'

+package main

We already have other fixtures that don't do a whole lot, in the interest of avoiding fixture proliferation I suggest using increment.go or testargs.go

polinasok

comment created time in 5 days

Pull request review comment go-delve/delve

service/dap: Initial implementation for 'dlv dap'

+package dap++import (+	"bufio"+	"encoding/json"+	"fmt"+	"io"+	"net"+	"path/filepath"++	"github.com/go-delve/delve/pkg/logflags"+	"github.com/go-delve/delve/pkg/proc"+	"github.com/go-delve/delve/service"+	"github.com/go-delve/delve/service/api"+	"github.com/go-delve/delve/service/debugger"+	"github.com/google/go-dap"+	"github.com/sirupsen/logrus"+)++// Package dap implements VSCode's Debug Adaptor Protocol (DAP).+// This allows delve to communicate with frontends using DAP+// without a separate adaptor. The frontend will run the debugger+// (which now doubles as an adaptor) in server mode listening on+// a port and communicating over TCP. This is work in progress,+// so for now Delve in dap mode only supports synchronous+// request-response communication, blocking while processing each request.+// For DAP details see https://microsoft.github.io/debug-adapter-protocol.++// Server implements a DAP server that can accept a single client for+// a single debug session. It does not support restarting.+// The server operates via two goroutines:+// (1) Main goroutine where the server is created via NewServer(),+// started via Run() and stopped via Stop().+// (2) Run goroutine started from Run() that accepts a client connection,+// reads, decodes and processes each request, issuing commands to the+// underlying debugger and sending back events and responses.+// TODO(polina): make it asynchronous (i.e. 
launch goroutine per request)+type Server struct {+	// config is all the information necessary to start the debugger and server.+	config *service.Config+	// listener is used to accept the client connection.+	listener net.Listener+	// conn is the accepted client connection.+	conn net.Conn+	// reader is used to read requests from the connection.+	reader *bufio.Reader+	// debugger is the underlying debugger service.+	debugger *debugger.Debugger+	// log is used for structured logging.+	log *logrus.Entry+	// stopOnEntry is set to automatically stop the debugee after start.+	stopOnEntry bool+}++// NewServer creates a new DAP Server. It takes an opened Listener+// via config and assumes its ownerhsip. Optinally takes DisconnectChan+// via config, which can be used to detect when the client disconnects+// and the server is ready to be shut down. The caller must call+// Stop() on shutdown.+func NewServer(config *service.Config) *Server {+	logger := logflags.DAPLogger()+	logflags.WriteDAPListeningMessage(config.Listener.Addr().String())+	return &Server{+		config:   config,+		listener: config.Listener,+		log:      logger,+	}+}++// Stop stops the DAP debugger service, closes the listener and+// the client connection. It shuts down the underlying debugger+// and kills the target process if it was launched by it.+func (s *Server) Stop() {+	s.listener.Close()+	if s.conn != nil {+		// Unless Stop() was called after serveDAPCodec()+		// returned, this will result in closed connection error+		// on next read, breaking out of the read loop and+		// allowing the run goroutine to exit.+		s.conn.Close()+	}+	if s.debugger != nil {+		kill := s.config.AttachPid == 0+		if err := s.debugger.Detach(kill); err != nil {+			fmt.Println(err)+		}+	}+}++// signalDisconnect closes config.DisconnectChan if not nil, which+// signals that the client disconnected or there was a client+// connection failure. 
Since the server currently services only one+// client, this can be used as a signal to the entire server via+// Stop(). The function safeguards agaist closing the channel more+// than once and can be called multiple times. It is not thread-safe+// and is currently only called from the run goroutine.+// TODO(polina): lock this when we add more goroutines that could call+// this when we support asynchronous request-response communication.+func (s *Server) signalDisconnect() {+	// DisconnectChan might be nil at server creation if the+	// caller does not want to rely on the disconnect signal.+	if s.config.DisconnectChan != nil {+		close(s.config.DisconnectChan)+		// Take advantage of the nil check above to avoid accidentally+		// closing the channel twice and causing a panic, when this+		// function is called more than once. For example, we could+		// have the following sequence of events:+		// -- run goroutine: calls onDisconnectRequest()+		// -- run goroutine: calls signalDisconnect()+		// -- main goroutine: calls Stop()+		// -- main goroutine: Stop() closes client connection+		// -- run goroutine: serveDAPCodec() gets "closed network connection"+		// -- run goroutine: serveDAPCodec() returns+		// -- run goroutine: serveDAPCodec calls signalDisconnect()+		s.config.DisconnectChan = nil+	}+}++// Run launches a new goroutine where it accepts a client connection+// and starts processing requests from it. 
Use Stop() to close connection.+// The server does not support multiple clients, serially or in parallel.+// The server should be restarted for every new debug session.+// The debugger won't be started until launch/attach request is received.+// TODO(polina): allow new client connections for new debug sessions,+// so the editor needs to launch delve only once?+func (s *Server) Run() {+	go func() {+		conn, err := s.listener.Accept()+		if err != nil {+			// This will print if the server is killed with Ctrl+C+			// before client connection is accepted.+			fmt.Printf("Error accepting client connection: %s\n", err)+			s.signalDisconnect()+			return+		}+		s.conn = conn+		s.serveDAPCodec()+	}()+}++// serveDAPCodec reads and decodes requests from the client+// until it encounters an error or EOF, when it sends+// the disconnect signal and returns.+func (s *Server) serveDAPCodec() {+	defer s.signalDisconnect()+	s.reader = bufio.NewReader(s.conn)+	for {+		request, err := dap.ReadProtocolMessage(s.reader)+		// TODO(polina): Differentiate between errors and handle them+		// gracefully. 
For example,+		// -- "use of closed network connection" means client connection+		// was closed via Stop() in response to a disconnect request.+		// -- "Request command 'foo' is not supported" means we+		// potentially got some new DAP request that we do not yet have+		// decoding support for, so we can respond with an ErrorResponse.+		// TODO(polina): to support this add Seq to+		// dap.DecodeProtocolMessageFieldError.+		if err != nil {+			if err != io.EOF {+				fmt.Println("DAP error:", err)+			}+			return+		}+		s.handleRequest(request)+	}+}++func (s *Server) handleRequest(request dap.Message) {+	jsonmsg, _ := json.Marshal(request)+	s.log.Debug("[<- from client]", string(jsonmsg))++	switch request := request.(type) {+	case *dap.InitializeRequest:+		s.onInitializeRequest(request)+	case *dap.LaunchRequest:+		s.onLaunchRequest(request)+	case *dap.AttachRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.DisconnectRequest:+		s.onDisconnectRequest(request)+	case *dap.TerminateRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.RestartRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.SetBreakpointsRequest:+		s.onSetBreakpointsRequest(request)+	case *dap.SetFunctionBreakpointsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.SetExceptionBreakpointsRequest:+		s.onSetExceptionBreakpointsRequest(request)+	case *dap.ConfigurationDoneRequest:+		s.onConfigurationDoneRequest(request)+	case *dap.ContinueRequest:+		s.onContinueRequest(request)+	case *dap.NextRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.StepInRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.StepOutRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.StepBackRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.ReverseContinueRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.RestartFrameRequest:+		
s.sendUnsupportedErrorResponse(request.Request)+	case *dap.GotoRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.PauseRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.StackTraceRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.ScopesRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.VariablesRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.SetVariableRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.SetExpressionRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.SourceRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.ThreadsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.TerminateThreadsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.EvaluateRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.StepInTargetsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.GotoTargetsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.CompletionsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.ExceptionInfoRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.LoadedSourcesRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.DataBreakpointInfoRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.SetDataBreakpointsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.ReadMemoryRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.DisassembleRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.CancelRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.BreakpointLocationsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	default:+		// This is a DAP message that go-dap has a struct for, so+		// decoding succeeded, but this function does not know how+		// to handle. 
We should be sending an ErrorResponse, but+		// we cannot get to Seq and other fields from dap.Message.+		// TODO(polina): figure out how to handle this better.+		// Consider adding GetSeq() method to dap.Message interface.+		panic(fmt.Sprintf("Unable to process %#v", request))+	}+}++func (s *Server) send(message dap.Message) {+	jsonmsg, _ := json.Marshal(message)+	s.log.Debug("[-> to client]", string(jsonmsg))+	dap.WriteProtocolMessage(s.conn, message)+}++func (s *Server) onInitializeRequest(request *dap.InitializeRequest) {+	// TODO(polina): Respond with an error if debug session is in progress?+	response := &dap.InitializeResponse{Response: *newResponse(request.Request)}+	response.Body.SupportsConfigurationDoneRequest = true+	// TODO(polina): support this to match vscode-go functionality+	response.Body.SupportsSetVariable = false+	s.send(response)+}++func (s *Server) onLaunchRequest(request *dap.LaunchRequest) {+	// TODO(polina): Respond with an error if debug session is in progress?+	program, ok := request.Arguments["program"]+	if !ok || program == "" {+		s.sendErrorResponse(request.Request,+			3000, "Failed to launch",+			"The program attribute is missing in debug configuration.")+		return+	}+	s.config.ProcessArgs = []string{program.(string)}+	s.config.WorkingDir = filepath.Dir(program.(string))+	// TODO: support program args++	stop, ok := request.Arguments["stopOnEntry"]+	s.stopOnEntry = (ok && stop == true)++	mode, ok := request.Arguments["mode"]+	if !ok || mode == "" {+		mode = "debug"+	}+	// TODO(polina): support "debug", "test" and "remote" modes+	if mode != "exec" {+		s.sendErrorResponse(request.Request,+			3000, "Failed to launch",+			fmt.Sprintf("Unsupported 'mode' value '%s' in debug configuration.", mode))+		return+	}++	config := &debugger.Config{+		WorkingDir:           s.config.WorkingDir,+		AttachPid:            0,+		CoreFile:             "",+		Backend:              s.config.Backend,+		Foreground:           s.config.Foreground,+		
DebugInfoDirectories: s.config.DebugInfoDirectories,+		CheckGoVersion:       s.config.CheckGoVersion,+	}+	var err error+	if s.debugger, err = debugger.New(config, s.config.ProcessArgs); err != nil {+		s.sendErrorResponse(request.Request,+			3000, "Failed to launch", err.Error())

3000 should be a named constant.

polinasok

comment created time in 5 days

Pull request review comment go-delve/delve

service/dap: Initial implementation for 'dlv dap'

+package dap++import (+	"bufio"+	"encoding/json"+	"fmt"+	"io"+	"net"+	"path/filepath"++	"github.com/go-delve/delve/pkg/logflags"+	"github.com/go-delve/delve/pkg/proc"+	"github.com/go-delve/delve/service"+	"github.com/go-delve/delve/service/api"+	"github.com/go-delve/delve/service/debugger"+	"github.com/google/go-dap"+	"github.com/sirupsen/logrus"+)++// Package dap implements VSCode's Debug Adaptor Protocol (DAP).+// This allows delve to communicate with frontends using DAP+// without a separate adaptor. The frontend will run the debugger+// (which now doubles as an adaptor) in server mode listening on+// a port and communicating over TCP. This is work in progress,+// so for now Delve in dap mode only supports synchronous+// request-response communication, blocking while processing each request.+// For DAP details see https://microsoft.github.io/debug-adapter-protocol.++// Server implements a DAP server that can accept a single client for+// a single debug session. It does not support restarting.+// The server operates via two goroutines:+// (1) Main goroutine where the server is created via NewServer(),+// started via Run() and stopped via Stop().+// (2) Run goroutine started from Run() that accepts a client connection,+// reads, decodes and processes each request, issuing commands to the+// underlying debugger and sending back events and responses.+// TODO(polina): make it asynchronous (i.e. 
launch goroutine per request)+type Server struct {+	// config is all the information necessary to start the debugger and server.+	config *service.Config+	// listener is used to accept the client connection.+	listener net.Listener+	// conn is the accepted client connection.+	conn net.Conn+	// reader is used to read requests from the connection.+	reader *bufio.Reader+	// debugger is the underlying debugger service.+	debugger *debugger.Debugger+	// log is used for structured logging.+	log *logrus.Entry+	// stopOnEntry is set to automatically stop the debugee after start.+	stopOnEntry bool+}++// NewServer creates a new DAP Server. It takes an opened Listener+// via config and assumes its ownerhsip. Optinally takes DisconnectChan+// via config, which can be used to detect when the client disconnects+// and the server is ready to be shut down. The caller must call+// Stop() on shutdown.+func NewServer(config *service.Config) *Server {+	logger := logflags.DAPLogger()+	logflags.WriteDAPListeningMessage(config.Listener.Addr().String())+	return &Server{+		config:   config,+		listener: config.Listener,+		log:      logger,+	}+}++// Stop stops the DAP debugger service, closes the listener and+// the client connection. It shuts down the underlying debugger+// and kills the target process if it was launched by it.+func (s *Server) Stop() {+	s.listener.Close()+	if s.conn != nil {+		// Unless Stop() was called after serveDAPCodec()+		// returned, this will result in closed connection error+		// on next read, breaking out of the read loop and+		// allowing the run goroutine to exit.+		s.conn.Close()+	}+	if s.debugger != nil {+		kill := s.config.AttachPid == 0+		if err := s.debugger.Detach(kill); err != nil {+			fmt.Println(err)+		}+	}+}++// signalDisconnect closes config.DisconnectChan if not nil, which+// signals that the client disconnected or there was a client+// connection failure. 
Since the server currently services only one+// client, this can be used as a signal to the entire server via+// Stop(). The function safeguards agaist closing the channel more+// than once and can be called multiple times. It is not thread-safe+// and is currently only called from the run goroutine.+// TODO(polina): lock this when we add more goroutines that could call+// this when we support asynchronous request-response communication.+func (s *Server) signalDisconnect() {+	// DisconnectChan might be nil at server creation if the+	// caller does not want to rely on the disconnect signal.+	if s.config.DisconnectChan != nil {+		close(s.config.DisconnectChan)+		// Take advantage of the nil check above to avoid accidentally+		// closing the channel twice and causing a panic, when this+		// function is called more than once. For example, we could+		// have the following sequence of events:+		// -- run goroutine: calls onDisconnectRequest()+		// -- run goroutine: calls signalDisconnect()+		// -- main goroutine: calls Stop()+		// -- main goroutine: Stop() closes client connection+		// -- run goroutine: serveDAPCodec() gets "closed network connection"+		// -- run goroutine: serveDAPCodec() returns+		// -- run goroutine: serveDAPCodec calls signalDisconnect()+		s.config.DisconnectChan = nil+	}+}++// Run launches a new goroutine where it accepts a client connection+// and starts processing requests from it. 
Use Stop() to close connection.+// The server does not support multiple clients, serially or in parallel.+// The server should be restarted for every new debug session.+// The debugger won't be started until launch/attach request is received.+// TODO(polina): allow new client connections for new debug sessions,+// so the editor needs to launch delve only once?+func (s *Server) Run() {+	go func() {+		conn, err := s.listener.Accept()+		if err != nil {+			// This will print if the server is killed with Ctrl+C+			// before client connection is accepted.+			fmt.Printf("Error accepting client connection: %s\n", err)+			s.signalDisconnect()+			return+		}+		s.conn = conn+		s.serveDAPCodec()+	}()+}++// serveDAPCodec reads and decodes requests from the client+// until it encounters an error or EOF, when it sends+// the disconnect signal and returns.+func (s *Server) serveDAPCodec() {+	defer s.signalDisconnect()+	s.reader = bufio.NewReader(s.conn)+	for {+		request, err := dap.ReadProtocolMessage(s.reader)+		// TODO(polina): Differentiate between errors and handle them+		// gracefully. 
For example,+		// -- "use of closed network connection" means client connection+		// was closed via Stop() in response to a disconnect request.+		// -- "Request command 'foo' is not supported" means we+		// potentially got some new DAP request that we do not yet have+		// decoding support for, so we can respond with an ErrorResponse.+		// TODO(polina): to support this add Seq to+		// dap.DecodeProtocolMessageFieldError.+		if err != nil {+			if err != io.EOF {+				fmt.Println("DAP error:", err)+			}+			return+		}+		s.handleRequest(request)+	}+}++func (s *Server) handleRequest(request dap.Message) {+	jsonmsg, _ := json.Marshal(request)+	s.log.Debug("[<- from client]", string(jsonmsg))++	switch request := request.(type) {+	case *dap.InitializeRequest:+		s.onInitializeRequest(request)+	case *dap.LaunchRequest:+		s.onLaunchRequest(request)+	case *dap.AttachRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.DisconnectRequest:+		s.onDisconnectRequest(request)+	case *dap.TerminateRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.RestartRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.SetBreakpointsRequest:+		s.onSetBreakpointsRequest(request)+	case *dap.SetFunctionBreakpointsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.SetExceptionBreakpointsRequest:+		s.onSetExceptionBreakpointsRequest(request)+	case *dap.ConfigurationDoneRequest:+		s.onConfigurationDoneRequest(request)+	case *dap.ContinueRequest:+		s.onContinueRequest(request)+	case *dap.NextRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.StepInRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.StepOutRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.StepBackRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.ReverseContinueRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.RestartFrameRequest:+		
s.sendUnsupportedErrorResponse(request.Request)+	case *dap.GotoRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.PauseRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.StackTraceRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.ScopesRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.VariablesRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.SetVariableRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.SetExpressionRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.SourceRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.ThreadsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.TerminateThreadsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.EvaluateRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.StepInTargetsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.GotoTargetsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.CompletionsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.ExceptionInfoRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.LoadedSourcesRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.DataBreakpointInfoRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.SetDataBreakpointsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.ReadMemoryRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.DisassembleRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.CancelRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	case *dap.BreakpointLocationsRequest:+		s.sendUnsupportedErrorResponse(request.Request)+	default:+		// This is a DAP message that go-dap has a struct for, so+		// decoding succeeded, but this function does not know how+		// to handle. 
We should be sending an ErrorResponse, but+		// we cannot get to Seq and other fields from dap.Message.+		// TODO(polina): figure out how to handle this better.+		// Consider adding GetSeq() method to dap.Message interface.+		panic(fmt.Sprintf("Unable to process %#v", request))+	}+}++func (s *Server) send(message dap.Message) {+	jsonmsg, _ := json.Marshal(message)+	s.log.Debug("[-> to client]", string(jsonmsg))+	dap.WriteProtocolMessage(s.conn, message)+}++func (s *Server) onInitializeRequest(request *dap.InitializeRequest) {+	// TODO(polina): Respond with an error if debug session is in progress?+	response := &dap.InitializeResponse{Response: *newResponse(request.Request)}+	response.Body.SupportsConfigurationDoneRequest = true+	// TODO(polina): support this to match vscode-go functionality+	response.Body.SupportsSetVariable = false+	s.send(response)+}++func (s *Server) onLaunchRequest(request *dap.LaunchRequest) {+	// TODO(polina): Respond with an error if debug session is in progress?+	program, ok := request.Arguments["program"]+	if !ok || program == "" {+		s.sendErrorResponse(request.Request,+			3000, "Failed to launch",+			"The program attribute is missing in debug configuration.")+		return+	}+	s.config.ProcessArgs = []string{program.(string)}+	s.config.WorkingDir = filepath.Dir(program.(string))+	// TODO: support program args++	stop, ok := request.Arguments["stopOnEntry"]+	s.stopOnEntry = (ok && stop == true)++	mode, ok := request.Arguments["mode"]+	if !ok || mode == "" {+		mode = "debug"+	}+	// TODO(polina): support "debug", "test" and "remote" modes+	if mode != "exec" {+		s.sendErrorResponse(request.Request,+			3000, "Failed to launch",+			fmt.Sprintf("Unsupported 'mode' value '%s' in debug configuration.", mode))+		return+	}++	config := &debugger.Config{+		WorkingDir:           s.config.WorkingDir,+		AttachPid:            0,+		CoreFile:             "",+		Backend:              s.config.Backend,+		Foreground:           s.config.Foreground,+		
DebugInfoDirectories: s.config.DebugInfoDirectories,+		CheckGoVersion:       s.config.CheckGoVersion,+	}+	var err error+	if s.debugger, err = debugger.New(config, s.config.ProcessArgs); err != nil {+		s.sendErrorResponse(request.Request,+			3000, "Failed to launch", err.Error())+		return+	}++	// Notify the client that the debugger is ready to start accepting+	// configuration requests for setting breakpoints, etc. The client+	// will end the configuration sequence with 'configurationDone'.+	s.send(&dap.InitializedEvent{Event: *newEvent("initialized")})+	s.send(&dap.LaunchResponse{Response: *newResponse(request.Request)})+}++// onDisconnectRequest handles the DisconnectRequest. Per the DAP spec,+// it disconnects the debuggee and signals that the debug adaptor+// (in our case this TCP server) can be terminated.+func (s *Server) onDisconnectRequest(request *dap.DisconnectRequest) {+	s.send(&dap.DisconnectResponse{Response: *newResponse(request.Request)})+	// TODO(polina): only halt if the program is running

I don't think it matters (but I haven't checked).

polinasok

comment created time in 5 days

Pull request review comment go-delve/delve

service/dap: Initial implementation for 'dlv dap'

+package dap++import (+	"bufio"+	"encoding/json"+	"fmt"+	"io"+	"net"+	"path/filepath"++	"github.com/go-delve/delve/pkg/logflags"+	"github.com/go-delve/delve/pkg/proc"+	"github.com/go-delve/delve/service"+	"github.com/go-delve/delve/service/api"+	"github.com/go-delve/delve/service/debugger"+	"github.com/google/go-dap"+	"github.com/sirupsen/logrus"+)++// Package dap implements VSCode's Debug Adaptor Protocol (DAP).+// This allows delve to communicate with frontends using DAP+// without a separate adaptor. The frontend will run the debugger+// (which now doubles as an adaptor) in server mode listening on+// a port and communicating over TCP. This is work in progress,+// so for now Delve in dap mode only supports synchronous+// request-response communication, blocking while processing each request.+// For DAP details see https://microsoft.github.io/debug-adapter-protocol.++// Server implements a DAP server that can accept a single client for+// a single debug session. It does not support restarting.+// The server operates via two goroutines:+// (1) Main goroutine where the server is created via NewServer(),+// started via Run() and stopped via Stop().+// (2) Run goroutine started from Run() that accepts a client connection,+// reads, decodes and processes each request, issuing commands to the+// underlying debugger and sending back events and responses.+// TODO(polina): make it asynchronous (i.e. 
launch goroutine per request)
type Server struct {
	// config is all the information necessary to start the debugger and server.
	config *service.Config
	// listener is used to accept the client connection.
	listener net.Listener
	// conn is the accepted client connection.
	conn net.Conn
	// reader is used to read requests from the connection.
	reader *bufio.Reader
	// debugger is the underlying debugger service.
	debugger *debugger.Debugger
	// log is used for structured logging.
	log *logrus.Entry
	// stopOnEntry is set to automatically stop the debugee after start.
	stopOnEntry bool
}

// NewServer creates a new DAP Server. It takes an opened Listener
// via config and assumes its ownerhsip. Optinally takes DisconnectChan
// via config, which can be used to detect when the client disconnects
// and the server is ready to be shut down. The caller must call
// Stop() on shutdown.
func NewServer(config *service.Config) *Server {
	logger := logflags.DAPLogger()
	logflags.WriteDAPListeningMessage(config.Listener.Addr().String())
	return &Server{
		config:   config,
		listener: config.Listener,
		log:      logger,
	}
}

// Stop stops the DAP debugger service, closes the listener and
// the client connection. It shuts down the underlying debugger
// and kills the target process if it was launched by it.
func (s *Server) Stop() {
	s.listener.Close()
	if s.conn != nil {
		// Unless Stop() was called after serveDAPCodec()
		// returned, this will result in closed connection error
		// on next read, breaking out of the read loop and
		// allowing the run goroutine to exit.
		s.conn.Close()
	}
	if s.debugger != nil {
		kill := s.config.AttachPid == 0
		if err := s.debugger.Detach(kill); err != nil {
			fmt.Println(err)
		}
	}
}

// signalDisconnect closes config.DisconnectChan if not nil, which
// signals that the client disconnected or there was a client
// connection failure. Since the server currently services only one
// client, this can be used as a signal to the entire server via
// Stop(). The function safeguards agaist closing the channel more
// than once and can be called multiple times. It is not thread-safe
// and is currently only called from the run goroutine.
// TODO(polina): lock this when we add more goroutines that could call
// this when we support asynchronous request-response communication.
func (s *Server) signalDisconnect() {
	// DisconnectChan might be nil at server creation if the
	// caller does not want to rely on the disconnect signal.
	if s.config.DisconnectChan != nil {
		close(s.config.DisconnectChan)
		// Take advantage of the nil check above to avoid accidentally
		// closing the channel twice and causing a panic, when this
		// function is called more than once. For example, we could
		// have the following sequence of events:
		// -- run goroutine: calls onDisconnectRequest()
		// -- run goroutine: calls signalDisconnect()
		// -- main goroutine: calls Stop()
		// -- main goroutine: Stop() closes client connection
		// -- run goroutine: serveDAPCodec() gets "closed network connection"
		// -- run goroutine: serveDAPCodec() returns
		// -- run goroutine: serveDAPCodec calls signalDisconnect()
		s.config.DisconnectChan = nil
	}
}

// Run launches a new goroutine where it accepts a client connection
// and starts processing requests from it. Use Stop() to close connection.
// The server does not support multiple clients, serially or in parallel.
// The server should be restarted for every new debug session.
// The debugger won't be started until launch/attach request is received.
// TODO(polina): allow new client connections for new debug sessions,
// so the editor needs to launch delve only once?
func (s *Server) Run() {
	go func() {
		conn, err := s.listener.Accept()
		if err != nil {
			// This will print if the server is killed with Ctrl+C
			// before client connection is accepted.
			fmt.Printf("Error accepting client connection: %s\n", err)
			s.signalDisconnect()
			return
		}
		s.conn = conn
		s.serveDAPCodec()
	}()
}

// serveDAPCodec reads and decodes requests from the client
// until it encounters an error or EOF, when it sends
// the disconnect signal and returns.
func (s *Server) serveDAPCodec() {
	defer s.signalDisconnect()
	s.reader = bufio.NewReader(s.conn)
	for {
		request, err := dap.ReadProtocolMessage(s.reader)
		// TODO(polina): Differentiate between errors and handle them
		// gracefully. For example,
		// -- "use of closed network connection" means client connection
		// was closed via Stop() in response to a disconnect request.
		// -- "Request command 'foo' is not supported" means we
		// potentially got some new DAP request that we do not yet have
		// decoding support for, so we can respond with an ErrorResponse.
		// TODO(polina): to support this add Seq to
		// dap.DecodeProtocolMessageFieldError.
		if err != nil {
			if err != io.EOF {
				fmt.Println("DAP error:", err)
			}
			return
		}
		s.handleRequest(request)
	}
}

func (s *Server) handleRequest(request dap.Message) {
	jsonmsg, _ := json.Marshal(request)
	s.log.Debug("[<- from client]", string(jsonmsg))

	switch request := request.(type) {
	case *dap.InitializeRequest:
		s.onInitializeRequest(request)
	case *dap.LaunchRequest:
		s.onLaunchRequest(request)
	case *dap.AttachRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.DisconnectRequest:
		s.onDisconnectRequest(request)
	case *dap.TerminateRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.RestartRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.SetBreakpointsRequest:
		s.onSetBreakpointsRequest(request)
	case *dap.SetFunctionBreakpointsRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.SetExceptionBreakpointsRequest:
		s.onSetExceptionBreakpointsRequest(request)
	case *dap.ConfigurationDoneRequest:
		s.onConfigurationDoneRequest(request)
	case *dap.ContinueRequest:
		s.onContinueRequest(request)
	case *dap.NextRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.StepInRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.StepOutRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.StepBackRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.ReverseContinueRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.RestartFrameRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.GotoRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.PauseRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.StackTraceRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.ScopesRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.VariablesRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.SetVariableRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.SetExpressionRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.SourceRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.ThreadsRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.TerminateThreadsRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.EvaluateRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.StepInTargetsRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.GotoTargetsRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.CompletionsRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.ExceptionInfoRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.LoadedSourcesRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.DataBreakpointInfoRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.SetDataBreakpointsRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.ReadMemoryRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.DisassembleRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.CancelRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	case *dap.BreakpointLocationsRequest:
		s.sendUnsupportedErrorResponse(request.Request)
	default:
		// This is a DAP message that go-dap has a struct for, so
		// decoding succeeded, but this function does not know how
		// to handle. We should be sending an ErrorResponse, but
		// we cannot get to Seq and other fields from dap.Message.
		// TODO(polina): figure out how to handle this better.
		// Consider adding GetSeq() method to dap.Message interface.
		panic(fmt.Sprintf("Unable to process %#v", request))
	}
}

func (s *Server) send(message dap.Message) {
	jsonmsg, _ := json.Marshal(message)
	s.log.Debug("[-> to client]", string(jsonmsg))
	dap.WriteProtocolMessage(s.conn, message)
}

func (s *Server) onInitializeRequest(request *dap.InitializeRequest) {
	// TODO(polina): Respond with an error if debug session is in progress?
	response := &dap.InitializeResponse{Response: *newResponse(request.Request)}
	response.Body.SupportsConfigurationDoneRequest = true
	// TODO(polina): support this to match vscode-go functionality
	response.Body.SupportsSetVariable = false
	s.send(response)
}

func (s *Server) onLaunchRequest(request *dap.LaunchRequest) {
	// TODO(polina): Respond with an error if debug session is in progress?
	program, ok := request.Arguments["program"]
	if !ok || program == "" {
		s.sendErrorResponse(request.Request,
			3000, "Failed to launch",
			"The program attribute is missing in debug configuration.")
		return
	}
	s.config.ProcessArgs = []string{program.(string)}
	s.config.WorkingDir = filepath.Dir(program.(string))
	// TODO: support program args

	stop, ok := request.Arguments["stopOnEntry"]
	s.stopOnEntry = (ok && stop == true)

	mode, ok := request.Arguments["mode"]
	if !ok || mode == "" {
		mode = "debug"
	}
	// TODO(polina): support "debug", "test" and "remote" modes
	if mode != "exec" {
		s.sendErrorResponse(request.Request,
			3000, "Failed to launch",
			fmt.Sprintf("Unsupported 'mode' value '%s' in debug configuration.", mode))
		return
	}

	config := &debugger.Config{
		WorkingDir:           s.config.WorkingDir,
		AttachPid:            0,
		CoreFile:             "",
		Backend:              s.config.Backend,
		Foreground:           s.config.Foreground,
		DebugInfoDirectories: s.config.DebugInfoDirectories,
		CheckGoVersion:       s.config.CheckGoVersion,
	}
	var err error
	if s.debugger, err = debugger.New(config, s.config.ProcessArgs); err != nil {
		s.sendErrorResponse(request.Request,
			3000, "Failed to launch", err.Error())
		return
	}

	// Notify the client that the debugger is ready to start accepting
	// configuration requests for setting breakpoints, etc. The client
	// will end the configuration sequence with 'configurationDone'.
	s.send(&dap.InitializedEvent{Event: *newEvent("initialized")})
	s.send(&dap.LaunchResponse{Response: *newResponse(request.Request)})
}

// onDisconnectRequest handles the DisconnectRequest. Per the DAP spec,
// it disconnects the debuggee and signals that the debug adaptor
// (in our case this TCP server) can be terminated.
func (s *Server) onDisconnectRequest(request *dap.DisconnectRequest) {
	s.send(&dap.DisconnectResponse{Response: *newResponse(request.Request)})
	// TODO(polina): only halt if the program is running
	if s.debugger != nil {
		_, err := s.debugger.Command(&api.DebuggerCommand{Name: api.Halt})
		if err != nil {
			s.log.Error(err)
		}
		kill := s.config.AttachPid == 0
		err = s.debugger.Detach(kill)
		if err != nil {
			s.log.Error(err)
		}
	}
	// TODO(polina): make thread-safe when handlers become asynchronous.
	s.signalDisconnect()
}

func (s *Server) onSetBreakpointsRequest(request *dap.SetBreakpointsRequest) {

One thing to check is whether VSCode sends this request while the program is running; if it does, note that in that case Debugger.CreateBreakpoint will block until the program stops.

polinasok

comment created time in 5 days

Pull request review comment go-delve/delve

service/dap: Initial implementation for 'dlv dap'

func (s *Server) onSetBreakpointsRequest(request *dap.SetBreakpointsRequest) {
	if request.Arguments.Source.Path == "" {
		s.log.Error("ERROR: Unable to set breakpoint for empty file path")
	}
	response := &dap.SetBreakpointsResponse{Response: *newResponse(request.Request)}
	response.Body.Breakpoints = make([]dap.Breakpoint, len(request.Arguments.Breakpoints))
	// Only verified breakpoints will be set and reported back in the
	// response. All breakpoints resulting in errors (e.g. duplicates
	// or lines that do not have statements) will be skipped.
	i := 0
	for _, b := range request.Arguments.Breakpoints {
		bp, err := s.debugger.CreateBreakpoint(
			&api.Breakpoint{File: request.Arguments.Source.Path, Line: b.Line})
		if err != nil {
			s.log.Error("ERROR:", err)
			continue
		}
		response.Body.Breakpoints[i].Verified = true
		response.Body.Breakpoints[i].Line = bp.Line
		i++
	}
	response.Body.Breakpoints = response.Body.Breakpoints[:i]
	s.send(response)
}

func (s *Server) onSetExceptionBreakpointsRequest(request *dap.SetExceptionBreakpointsRequest) {
	// Unlike what DAP documentation claims, this request is always sent
	// even though we specified no filters at initializatin. Handle as no-op.
	s.send(&dap.SetExceptionBreakpointsResponse{Response: *newResponse(request.Request)})
}

func (s *Server) onConfigurationDoneRequest(request *dap.ConfigurationDoneRequest) {
	if s.stopOnEntry {
		e := &dap.StoppedEvent{
			Event: *newEvent("stopped"),
			Body:  dap.StoppedEventBody{Reason: "breakpoint", ThreadId: 1, AllThreadsStopped: true},
		}
		s.send(e)
	}
	s.send(&dap.ConfigurationDoneResponse{Response: *newResponse(request.Request)})
	if !s.stopOnEntry {
		s.doContinue()
	}
}

func (s *Server) onContinueRequest(request *dap.ContinueRequest) {
	s.send(&dap.ContinueResponse{Response: *newResponse(request.Request)})
	s.doContinue()
}

func (s *Server) sendErrorResponse(request dap.Request, id int, summary string, details string) {
	er := &dap.ErrorResponse{}
	er.Type = "response"
	er.Command = request.Command
	er.RequestSeq = request.Seq
	er.Success = false
	er.Message = summary
	er.Body.Error.Id = id
	er.Body.Error.Format = fmt.Sprintf("%s: %s", summary, details)
	s.log.Error(er.Body.Error.Format)
	s.send(er)
}

func (s *Server) sendUnsupportedErrorResponse(request dap.Request) {
	s.sendErrorResponse(request, 9999, "Unsupported command",
		fmt.Sprintf("cannot process '%s' request", request.Command))
}

func newResponse(request dap.Request) *dap.Response {
	return &dap.Response{
		ProtocolMessage: dap.ProtocolMessage{
			Seq:  0,
			Type: "response",
		},
		Command:    request.Command,
		RequestSeq: request.Seq,
		Success:    true,
	}
}

func newEvent(event string) *dap.Event {
	return &dap.Event{
		ProtocolMessage: dap.ProtocolMessage{
			Seq:  0,
			Type: "event",
		},
		Event: event,
	}
}

func (s *Server) doContinue() {
	if s.debugger == nil {
		return
	}
	state, err := s.debugger.Command(&api.DebuggerCommand{Name: api.Continue})
	if err != nil {
		s.log.Error(err)
		switch err.(type) {
		case proc.ErrProcessExited:
			e := &dap.TerminatedEvent{Event: *newEvent("terminated")}
			s.send(e)
		default:
		}
		return
	}
	if state.Exited {
		e := &dap.TerminatedEvent{Event: *newEvent("terminated")}
		s.send(e)
	} else {
		e := &dap.StoppedEvent{Event: *newEvent("stopped")}
		e.Body.Reason = "breakpoint"

This could also be "pause" if a halt command was sent. I don't know if it makes a difference, and it will probably be hard to make this distinction (so if it doesn't matter in practice it probably isn't worth bothering with).

polinasok

comment created time in 5 days

Pull request review comment go-delve/delve

service/dap: Initial implementation for 'dlv dap'

func (s *Server) handleRequest(request dap.Message) {
	jsonmsg, _ := json.Marshal(request)

I think that either this function or its caller should have some kind of panic guard, like service/rpccommon/server.go does (see newInternalError and its callers). The idea is that if we have a bug and we accidentally do an out-of-bounds slice access, it's safer to recover from that panic and return it to the client (as an internal error) than to let the panic bring down Delve, because that will also kill the user's process, which is highly disruptive.

polinasok

comment created time in 5 days

Pull request review comment go-delve/delve

service/dap: Initial implementation for 'dlv dap'

[diff context elided]
+	default:
+		// This is a DAP message that go-dap has a struct for, so
+		// decoding succeeded, but this function does not know how
+		// to handle. We should be sending an ErrorResponse, but
+		// we cannot get to Seq and other fields from dap.Message.
+		// TODO(polina): figure out how to handle this better.
+		// Consider adding GetSeq() method to dap.Message interface.
+		panic(fmt.Sprintf("Unable to process %#v", request))

I'd rather log an error than panic here.

polinasok

comment created time in 5 days

Pull request review comment go-delve/delve

service/dap: Initial implementation for 'dlv dap'

[diff context elided]
+		listener, err := net.Listen("tcp", Addr)
+		if err != nil {
+			fmt.Printf("couldn't start listener: %s\n", err)

os.Stderr?

polinasok

comment created time in 5 days

Pull request review comment go-delve/delve

service/dap: Initial implementation for 'dlv dap'

 func Setup(logFlag bool, logstr string, logDest string) error {
 			debugLineErrors = true
 		case "rpc":
 			rpc = true
+		case "dap":

The values that we recognize here are documented inside cmd/dlv/cmds/commands.go (grep for "Help about logging flags"). Please add it there too.

polinasok

comment created time in 5 days

Pull request review comment go-delve/delve

proc/gdbserial: Reload thread registers on demand

 func (p *Process) initialize(path string, debugInfoDirs []string) error {
 		}
 	}
 
+	p.clearThreadSignals()

This call needs to happen after updateThreadList; that's the order we were doing things before. As it is, this change breaks the lldb backend on linux. It's weird that it doesn't also break it on macOS; I guess debugserver works slightly differently.

The reason is that on linux, when we check the thread status after attaching, it reports the threads as stopped with signal '13' (I assume it's the signal it used to stop them) but we don't want to propagate that back to the threads.

derekparker

comment created time in 5 days

Pull request review comment go-delve/delve

proc/gdbserial: Reload thread registers on demand

 func (p *Process) Restart(pos string) error {
 		return err
 	}
 
+	p.clearThreadRegisters()
+	p.clearThreadSignals()

Same as above, it should be updateThreadList, clearThreadSignals, clearThreadRegisters (I don't think it matters here, but I'd still do it for consistency).

derekparker

comment created time in 5 days

issue comment golang/go

proposal: cmd/link: Include build meta information

The compiler flags are already saved in the DW_AT_producer attribute in debug_info.

michael-obermueller

comment created time in 6 days

Pull request review comment go-delve/delve

proc: only format registers value when it's necessary

 func (a *ARM64) AddrAndStackRegsToDwarfRegisters(staticBase, pc, sp, bp, lr uint
 		LRRegNum:   arm64DwarfLRRegNum,
 	}
 }
+
+func (a *ARM64) DwarfRegisterToString(name string, reg *op.DwarfRegister) string {

I think it would be a shame to make it into an interface just for this thing, given that everything else is the same. Also, we'll have to go back from register numbers to these interface objects somehow, and I don't know how good that would look.

aarzilli

comment created time in 6 days

Pull request review comment go-delve/delve

proc: only format registers value when it's necessary

 func (a *ARM64) AddrAndStackRegsToDwarfRegisters(staticBase, pc, sp, bp, lr uint
 		LRRegNum:   arm64DwarfLRRegNum,
 	}
 }
+
+func (a *ARM64) DwarfRegisterToString(name string, reg *op.DwarfRegister) string {

Correction: it would be proc.Register not op.DwarfRegister to get the String method, of course.

aarzilli

comment created time in 6 days

push event aarzilli/delve

aarzilli

commit sha c1c90bfb50269016db8e8bae62616b042fa10831

proc: only format registers value when it's necessary

A significant amount of time is spent generating the string representation for the proc.Registers object of each thread; since this field is rarely used (only when the Registers API is called) it should be generated on demand.

Also by changing the internal representation of proc.Register to be closer to that of op.DwarfRegister it will help us implement #1838 (when Delve will need to be able to display the registers of an internal frame, which we currently represent using op.DwarfRegister objects).

Benchmark before:
BenchmarkConditionalBreakpoints-4   1   22292554301 ns/op
Benchmark after:
BenchmarkConditionalBreakpoints-4   1   17326345671 ns/op

Reduces conditional breakpoint latency from 2.2ms to 1.7ms.

Updates #1549, #1838

view details

push time in 6 days

Pull request review comment go-delve/delve

proc: only format registers value when it's necessary

 func (a *ARM64) AddrAndStackRegsToDwarfRegisters(staticBase, pc, sp, bp, lr uint
 		LRRegNum:   arm64DwarfLRRegNum,
 	}
 }
+
+func (a *ARM64) DwarfRegisterToString(name string, reg *op.DwarfRegister) string {

It doesn't use `a` but it's different depending on the architecture. One thing we could do is stuff a function pointer inside op.DwarfRegister and then its String method would be:

func (reg *DwarfRegister) String() string {
    return reg.fmtFunc(reg)
}

However at some point we'll also want a way to get the register name given a register number, and DwarfRegisterToString could be extended to do that; otherwise we'll need something else (probably another method of Arch).

aarzilli

comment created time in 6 days

Pull request review comment go-delve/delve

proc: only format registers value when it's necessary

 var amd64DwarfToName = map[int]string{
 	66: "SW",
 }
+var amd64NameToDwarf = func() map[string]int {

I didn't come up with this, I saw it somewhere.

aarzilli

comment created time in 6 days

issue comment go-delve/delve

testing: should "make test" use the same flags as CI?

I'd like to keep the vendor directory; I think repositories should be self-contained, and I think that we should test that what we have in vendor works (although that's less of a priority now that go itself checks the integrity of the vendor directory, to some extent).

I don't have strong feelings either for or against adding GOFLAGS=-mod=vendor to make.go. One argument against is that adding a temporary replace directive to go.mod to test something is harder when -mod=vendor is involved.

Maybe we should just have a better way to check that go.mod and vendor match.

eliben

comment created time in 6 days

Pull request review comment go-delve/delve

Implement reverse step, next and stepout

 continueLoop: 		// in backward continue mode. 		case 0: 			if p.conn.direction == proc.Backward {+				p.ClearInternalBreakpoints()

No, we are giving the option to the frontend to either cancel the next operation or to execute another continue and finish it. Both the command line frontend and gdlv are taking advantage of this.

aarzilli

comment created time in 6 days

push event aarzilli/delve

chainhelen

commit sha f5608c7712a93288079401b2ca2d10cef267faf0

pkg/terminal: tolerate spurious spaces between arguments of cli.

Expressions such as:

config show-location-expr true
disassemble -a 0x4a23a0 0x4a23f2
disassemble -a 0x4a23a0 0x4a23f2

should all execute correctly.

Extends #795.

view details

chainhelen

commit sha 1ac990c705b47fbfb5eb0306bdb9d5c942a41085

README: Remove gitter chat link. Abandon gitter chat link, just leave the mailing list. Fix #1842

view details

Dmitry Neverov

commit sha ca3fe8889969b752d9bddd8d97ce9ffd4de7862b

proc,service: expose goroutine pprof labels in api

Labels can help in identifying a particular goroutine during debugging.

Fixes #1763

view details

Dmitry Neverov

commit sha 7fe81ca1d6743c9a4fcf494f9eb6d0c2030ad337

cache computed labels

view details

hengwu0

commit sha 3f7571ec30d35ea7e47073bd60bdfbcb31e646b1

proc: implement stacktrace of arm64 (#1780)

* proc: separate amd64-arch code
  Separate amd64 code about stacktrace, so we can add arm64 stacktrace code.
* proc: implement stacktrace of arm64
* delve now can use stack, frame commands on arm64-arch debug.
  Co-authored-by: tykcd996 <tang.yuke@zte.com.cn>
  Co-authored-by: hengwu0 <wu.heng@zte.com.cn>
* test: remove skip-code of stacktrace on arm64
* add LR DWARF register and remove skip-code for fixed tests
* proc: fix the Continue command after the hardcoded breakpoint on arm64
  Arm64 uses hardware breakpoints, and it will not set PC to the next instruction like amd64. We should move PC in both runtime.breakpoints and hardcoded breakpoints (probably cgo).
* proc: implement cgo stacktrace on arm64
* proc: combine amd64_stack.go and arm64_stack.go file
* proc: reorganize the stacktrace code
* move SwitchStack function arch-related
* fix Continue command after manual stop on arm64
* add timeout flag to make.go to enable infinite timeouts

Co-authored-by: aarzilli <alessandro.arzilli@gmail.com>
Co-authored-by: hengwu0 <wu.heng@zte.com.cn>
Co-authored-by: tykcd996 <56993522+tykcd996@users.noreply.github.com>
Co-authored-by: Alessandro Arzilli <alessandro.arzilli@gmail.com>

view details

Derek Parker

commit sha 94a20d57da0195de0e5982e421adbf9b71437dc8

pkg/proc: Introduce Target and remove CommonProcess (#1834)

* pkg/proc: Introduce Target
* pkg/proc: Remove Common.fncallEnabled
  Realistically we only block it on recorded backends.
* pkg/proc: Move fncallForG to Target
* pkg/proc: Remove CommonProcess
  Remove final bit of functionality stored in CommonProcess and move it to *Target.
* pkg/proc: Add SupportsFunctionCall to Target

view details

Derek Parker

commit sha 75d7266dc0da0f983e066451ede2a1cfd22e3785

*: Update AppVeyor badge

It seems the project in AppVeyor moved from go-delve to my AppVeyor account and I'm not sure how that happened. However, it did break the badge rendering on the README so this patch fixes it with the current correct link.

view details

chainhelen

commit sha ff5734a0d8801bd0cbeba50f3f57a36108d6bdbd

pkg: fix abbreviation of Flag Register on amd64 (#1845)

view details

chainhelen

commit sha dee267b68b31883c4385af97adfe0aee1088739a

pkg/proc: fix typo in the comment of `PtraceGetFpRegset` (#1848)

view details

aarzilli

commit sha 279c29a37c2b3c3357d2489010b2273ee20459a6

proc: remove CX method from proc.Registers

It is not used anymore besides internally by the proc/gdbserial backend.

view details

aarzilli

commit sha fc3e01bb5b62d080f629b8deffeb4a26b903e9c1

tests: add benchmark for conditional breakpoints

view details

aarzilli

commit sha 7eddfb77b90d0e6bf80bdf8330ee5e837e201fe5

dwarf/reader: precalcStack does not need to read past the first entry

It was reading all the way to the end of the debug_info section, slowing down stacktraces substantially.

Benchmark before:
BenchmarkConditionalBreakpoints-4   1   80344642562 ns/op
Benchmark after:
BenchmarkConditionalBreakpoints-4   1   22218288218 ns/op

i.e. a reduction of the cost of a breakpoint hit from 8ms to 2.2ms.

Updates #1549

view details

chainhelen

commit sha f925d3c8d781ad04cdbe89b020f8cf445b3f3b80

pkg/proc: remove meanless code in dwarf_expr_test.go. (#1850)

view details

Stig Otnes Kolstad

commit sha bc9d95d615987ae78b946dd18ef1989775c705e8

Documentation/cli: add info for element limit config keys (#1853)

view details

Alessandro Arzilli

commit sha 0741d3e57fc1f4cc45d98e6f9a1324c528a8b88c

*: Go 1.14 support branch (#1727)

* tests: misc test fixes for go1.14
  - math.go is now ambiguous due to changes to the go runtime so specify that we mean our own math.go in _fixtures
  - go list -m requires vendor-mode to be disabled so pass '-mod=' to it in case user has GOFLAGS=-mod=vendor
  - update version of go/packages, required to work with go 1.14 (and executed go mod vendor)
  - Increased goroutine migration in one development version of Go 1.14 revealed a problem with TestCheckpoints in command_test.go and rr_test.go. The tests were always wrong because Restart(checkpoint) doesn't change the current thread but we can't assume that when the checkpoint was taken the current goroutine was running on the same thread.
* goversion: update maximum supported version
* Makefile: disable testing lldb-server backend on linux with Go 1.14
  There seems to be some incompatibility with lldb-server version 6.0.0 on linux and Go 1.14.
* proc/gdbserial: better handling of signals
  - if multiple signals are received simultaneously propagate all of them to the target threads instead of only one.
  - debugserver will drop an interrupt request if a target thread simultaneously receives a signal, handle this situation.
* dwarf/line: normalize backslashes for windows executables
  Starting with Go 1.14 the compiler sometimes emits backslashes as well as forward slashes in debug_line, normalize everything to / for conformity with the behavior of previous versions.
* proc/native: partial support for Windows async preempt mechanism
  See https://github.com/golang/go/issues/36494 for a description of why full support for 1.14 under windows is problematic.
* proc/native: disable Go 1.14 async preemption on Windows
  See https://github.com/golang/go/issues/36494

view details

chainhelen

commit sha bd279cb9da918b173923f8dbe84f837035106a26

pkg/proc: optimize code for supporting different arch in the future. (#1849)

view details

aarzilli

commit sha 99532c405a2d96203cd96e65f03ed381070d734f

all: bump version number and release notes

Thank you to: @stapelberg, @hengwu0, @tykcd996, @chainhelen, @alexbrainman, @nd and @stigok.

view details

Alessandro Arzilli

commit sha 81a86086dd9974d5bab76b1749d7450bb6b5cfab

cmd/dlv: Fix same-user check and add flag to disable it (#1839)

* service: also search IPv6 connections when checking user
  When checking if the user is allowed to connect to this Delve instance also search IPv6 connections even though the local address is IPv4.
  Fixes #1835
* cmd: add flag to disable same-user check
  Fixes #1835

view details

aarzilli

commit sha a7327d9816a1a878b323914b5926841448f4d10b

proc: move defer breakpoint code into a function

Moves the code that sets a breakpoint on the first deferred function, used by both next and StepOut, to its own function.

view details

aarzilli

commit sha e93ad54545cd35566e3929890e97f8b6ab97dc37

proc: implement reverse step/next/stepout

When the direction of execution is reversed (on a recording) Step, Next and StepOut will behave similarly to their forward version. However there are some subtle interactions between their behavior, prologue skipping, deferred calls and normal calls. Specifically:

- when stepping backwards we need to set a breakpoint on the first instruction after each CALL instruction; once this breakpoint is reached we need to execute a single StepInstruction operation to reverse step into the CALL.
- to ensure that the prologue is skipped, reverse next needs to check if it is on the first instruction after the prologue, and if it is, behave like reverse stepout.
- there is no reason to set breakpoints on deferred calls when reverse nexting or reverse stepping out; they will never be hit.
- reverse step out should generally place its breakpoint on the CALL instruction that created the current stack frame (which will be the CALL instruction immediately preceding the instruction at the return address).
- reverse step out needs to treat panic calls and deferreturn calls specially.

view details

push time in 6 days

pull request comment go-delve/delve

Implement reverse step, next and stepout

we'll have to think how to handle that ClearInternalBreakpoints call in Target instead of in the backend.

I'd just store the output of NewTarget into a field of gdbserial.Process and do p.tgt.ClearInternalBreakpoints. Deduplicating code in the backends is fine but I think it's silly that we are twisting ourselves into a knot to hide state from the backends.

aarzilli

comment created time in 6 days

pull request comment go-delve/delve

proc: only format registers value when it's necessary

I realize that the amd64DwarfToName map isn't used anymore, except to build its inverse map (amd64NameToDwarf), and so we could remove it. However I'd like to keep it around since it will eventually be used to implement #1838 (we'll need a way to put a name on registers for which we only have a dwarf register number).

Also, I explored the possibility of a much more radical change where the backend specifies the DWARF register number directly (instead of the thing we have now where we find the register number by using the register name). Technically it's possible; however, for the gdbserial backend we only have the register names, so it would simply push the ugly (register name → register number) table down, and it isn't really worth it.

Finally, the extra register names that get added manually in the constructor of amd64NameToDwarf are alternate names for some registers that mozilla rr uses.

aarzilli

comment created time in 6 days

PR opened go-delve/delve

proc: only format registers value when it's necessary
proc: only format registers value when it's necessary

A significant amount of time is spent generating the string
representation for the proc.Registers object of each thread, since this
field is rarely used (only when the Registers API is called) it should
be generated on demand.

Also by changing the internal representation of proc.Register to be
closer to that of op.DwarfRegister it will help us implement #1838
(when Delve will need to be able to display the registers of an
internal frame, which we currently represent using op.DwarfRegister
objects).

Benchmark before:

BenchmarkConditionalBreakpoints-4   	       1	22292554301 ns/op

Benchmark after:

BenchmarkConditionalBreakpoints-4   	       1	17326345671 ns/op

Reduces conditional breakpoint latency from 2.2ms to 1.7ms.

Updates #1549, #1838

+293 -282

0 comment

13 changed files

pr created time in 6 days

push event aarzilli/delve

chainhelen

commit sha dee267b68b31883c4385af97adfe0aee1088739a

pkg/proc: fix typo in the comment of `PtraceGetFpRegset` (#1848)

view details

aarzilli

commit sha 279c29a37c2b3c3357d2489010b2273ee20459a6

proc: remove CX method from proc.Registers

It is not used anymore besides internally by the proc/gdbserial backend.

view details

aarzilli

commit sha fc3e01bb5b62d080f629b8deffeb4a26b903e9c1

tests: add benchmark for conditional breakpoints

view details

aarzilli

commit sha 7eddfb77b90d0e6bf80bdf8330ee5e837e201fe5

dwarf/reader: precalcStack does not need to read past the first entry

It was reading all the way to the end of the debug_info section, slowing down stacktraces substantially.

Benchmark before:
BenchmarkConditionalBreakpoints-4   1   80344642562 ns/op
Benchmark after:
BenchmarkConditionalBreakpoints-4   1   22218288218 ns/op

i.e. a reduction of the cost of a breakpoint hit from 8ms to 2.2ms.

Updates #1549

view details

chainhelen

commit sha f925d3c8d781ad04cdbe89b020f8cf445b3f3b80

pkg/proc: remove meanless code in dwarf_expr_test.go. (#1850)

view details

Stig Otnes Kolstad

commit sha bc9d95d615987ae78b946dd18ef1989775c705e8

Documentation/cli: add info for element limit config keys (#1853)

view details

Alessandro Arzilli

commit sha 0741d3e57fc1f4cc45d98e6f9a1324c528a8b88c

*: Go 1.14 support branch (#1727)

* tests: misc test fixes for go1.14
  - math.go is now ambiguous due to changes to the go runtime so specify that we mean our own math.go in _fixtures
  - go list -m requires vendor-mode to be disabled so pass '-mod=' to it in case user has GOFLAGS=-mod=vendor
  - update version of go/packages, required to work with go 1.14 (and executed go mod vendor)
  - Increased goroutine migration in one development version of Go 1.14 revealed a problem with TestCheckpoints in command_test.go and rr_test.go. The tests were always wrong because Restart(checkpoint) doesn't change the current thread but we can't assume that when the checkpoint was taken the current goroutine was running on the same thread.
* goversion: update maximum supported version
* Makefile: disable testing lldb-server backend on linux with Go 1.14
  There seems to be some incompatibility with lldb-server version 6.0.0 on linux and Go 1.14.
* proc/gdbserial: better handling of signals
  - if multiple signals are received simultaneously propagate all of them to the target threads instead of only one.
  - debugserver will drop an interrupt request if a target thread simultaneously receives a signal, handle this situation.
* dwarf/line: normalize backslashes for windows executables
  Starting with Go 1.14 the compiler sometimes emits backslashes as well as forward slashes in debug_line, normalize everything to / for conformity with the behavior of previous versions.
* proc/native: partial support for Windows async preempt mechanism
  See https://github.com/golang/go/issues/36494 for a description of why full support for 1.14 under windows is problematic.
* proc/native: disable Go 1.14 async preemption on Windows
  See https://github.com/golang/go/issues/36494


chainhelen

commit sha bd279cb9da918b173923f8dbe84f837035106a26

pkg/proc: optimize code for supporting different arch in the future. (#1849)


aarzilli

commit sha 99532c405a2d96203cd96e65f03ed381070d734f

all: bump version number and release notes

Thank you to: @stapelberg, @hengwu0, @tykcd996, @chainhelen, @alexbrainman, @nd and @stigok.


aarzilli

commit sha 69de36f16cdb35c2248b01a4c614ab2ee2ee57fc

proc: only format registers value when it's necessary

A significant amount of time is spent generating the string representation for the proc.Registers object of each thread; since this field is rarely used (only when the Registers API is called) it should be generated on demand.

Also, by changing the internal representation of proc.Register to be closer to that of op.DwarfRegister it will help us implement #1838 (when Delve will need to be able to display the registers of an internal frame, which we currently represent using op.DwarfRegister objects).

Benchmark before:

BenchmarkConditionalBreakpoints-4   	       1	22292554301 ns/op

Benchmark after:

BenchmarkConditionalBreakpoints-4   	       1	17326345671 ns/op

Reduces conditional breakpoint latency from 2.2ms to 1.7ms.

Updates #1549, #1838


push time in 6 days

push event go-delve/delve

aarzilli

commit sha 99532c405a2d96203cd96e65f03ed381070d734f

all: bump version number and release notes

Thank you to: @stapelberg, @hengwu0, @tykcd996, @chainhelen, @alexbrainman, @nd and @stigok.


push time in 7 days

PR merged go-delve/delve

Delve 1.4.0 release notes
all: bump version number and release notes

Thank you to: @stapelberg, @hengwu0, @tykcd996, @chainhelen,
@alexbrainman, @nd and @stigok.

+29 -1

1 comment

2 changed files

aarzilli

pr closed time in 7 days

push event aarzilli/delve

Alessandro Arzilli

commit sha 0741d3e57fc1f4cc45d98e6f9a1324c528a8b88c

*: Go 1.14 support branch (#1727)

* tests: misc test fixes for go1.14

- math.go is now ambiguous due to changes to the go runtime so specify that we mean our own math.go in _fixtures
- go list -m requires vendor-mode to be disabled so pass '-mod=' to it in case user has GOFLAGS=-mod=vendor
- update version of go/packages, required to work with go 1.14 (and executed go mod vendor)
- Increased goroutine migration in one development version of Go 1.14 revealed a problem with TestCheckpoints in command_test.go and rr_test.go. The tests were always wrong because Restart(checkpoint) doesn't change the current thread but we can't assume that when the checkpoint was taken the current goroutine was running on the same thread.

* goversion: update maximum supported version

* Makefile: disable testing lldb-server backend on linux with Go 1.14

There seems to be some incompatibility with lldb-server version 6.0.0 on linux and Go 1.14.

* proc/gdbserial: better handling of signals

- if multiple signals are received simultaneously propagate all of them to the target threads instead of only one.
- debugserver will drop an interrupt request if a target thread simultaneously receives a signal, handle this situation.

* dwarf/line: normalize backslashes for windows executables

Starting with Go 1.14 the compiler sometimes emits backslashes as well as forward slashes in debug_line, normalize everything to / for conformity with the behavior of previous versions.

* proc/native: partial support for Windows async preempt mechanism

See https://github.com/golang/go/issues/36494 for a description of why full support for 1.14 under windows is problematic.

* proc/native: disable Go 1.14 async preemption on Windows

See https://github.com/golang/go/issues/36494


chainhelen

commit sha bd279cb9da918b173923f8dbe84f837035106a26

pkg/proc: optimize code for supporting different arch in the future. (#1849)


aarzilli

commit sha a6c08e22ce6bb3e7dac69485556513e6de6fc033

all: bump version number and release notes

Thank you to: @stapelberg, @hengwu0, @tykcd996, @chainhelen, @alexbrainman, @nd and @stigok.


push time in 7 days

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

continueLoop:
	}

	if err := p.updateThreadList(&tu); err != nil {
-		return nil, err
+		return nil, nil, err
	}

	if p.BinInfo().GOOS == "linux" {
		if err := linutil.ElfUpdateSharedObjects(p); err != nil {
-			return nil, err
+			return nil, nil, err
		}
	}

-	if err := p.setCurrentBreakpoints(); err != nil {
-		return nil, err
-	}
-
-	for _, thread := range p.threads {
-		if thread.strID == threadID {
-			var err error
-			switch sig {
-			case 0x91:
-				err = errors.New("bad access")
-			case 0x92:
-				err = errors.New("bad instruction")
-			case 0x93:
-				err = errors.New("arithmetic exception")
-			case 0x94:
-				err = errors.New("emulation exception")
-			case 0x95:
-				err = errors.New("software exception")
-			case 0x96:
-				err = errors.New("breakpoint exception")
+	tid, err := strconv.ParseUint(threadID, 16, 32)
+	if thread, ok := p.threads[int(tid)]; err == nil && ok {
+		var err error
+		switch sig {
+		case 0x91:
+			err = errors.New("bad access")
+		case 0x92:
+			err = errors.New("bad instruction")
+		case 0x93:
+			err = errors.New("arithmetic exception")
+		case 0x94:
+			err = errors.New("emulation exception")
+		case 0x95:
+			err = errors.New("software exception")
+		case 0x96:
+			err = errors.New("breakpoint exception")
+		}
+		r := make([]proc.Thread, 0, len(p.threads))
+		for _, t := range p.threads {

> I've added back in Thread.setbp in the logic of which threads to return

It's incomplete, setbp won't be set unless threadStopInfo is true. The whole logic from the old setCurrentBreakpoints should be used to decide which threads to return. IIRC just doing this will break the rr backend.

derekparker

comment created time in 8 days

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

package proc

+import (
+	"errors"
+	"go/ast"
+)
+
+// ErrDirChange is returned when trying to change execution direction
+// while there are still internal breakpoints set.
+var ErrDirChange = errors.New("direction change with internal breakpoints")
+
// Target represents the process being debugged.
type Target struct {
	Process

+	// Breakpoint table, holds information on breakpoints.
+	// Maps instruction address to Breakpoint struct.
+	breakpoints BreakpointMap
+
	// fncallForG stores a mapping of current active function calls.
	fncallForG map[int]*callInjection

	// gcache is a cache for Goroutines that we
	// have read and parsed from the targets memory.
	// This must be cleared whenever the target is resumed.
	gcache goroutineCache
+
+	// threadToBreakpoint maps threads to the breakpoint that they
+	// were trapped on.
+	threadToBreakpoint map[int]*BreakpointState

Responding point to point:

  1. Agree, not much of a win given that it's just a field and a getter
  2. There are two places in this patch where Target.threadToBreakpoint is iterated, by contrast there are four places where we have a Thread and we want to get its breakpoint
  3. Of the two places where we iterate threadToBreakpoint one is the thread resume logic where we do not want to just wipe the map because each thread needs to be single stepped, so in 50% of the cases the easy cleanup is not a win
  4. In principle, I'd rather the backend knew. Even if it doesn't need to know now it might in the future and there's no reason to erect artificial barriers to it
derekparker

comment created time in 8 days

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

func StepInstruction(dbp *Target) (err error) {
	if ok, err := dbp.Valid(); !ok {
		return err
	}
-	thread.Breakpoint().Clear()
-	err = thread.StepInstruction()
-	if err != nil {
-		return err
-	}
-	err = thread.SetCurrentBreakpoint(true)
-	if err != nil {
+
+	if err := dbp.threadStepInstruction(thread, true); err != nil {
		return err
	}
+
	if tg, _ := GetG(thread); tg != nil {
		dbp.SetSelectedGoroutine(tg)
	}
	return nil
}

+func (dbp *Target) threadStepInstruction(thread Thread, setbp bool) error {
+	bp := dbp.ThreadToBreakpoint(thread)
+	if bp.Breakpoint != nil {
+		if err := dbp.Process.ClearBreakpointFn(bp.Addr, bp.OriginalData); err != nil {
+			return err
+		}
+		defer dbp.Process.WriteBreakpointFn(bp.Addr)
+	}
+	if err := thread.StepInstruction(); err != nil {

> Perhaps that's a sign the method is doing too much

You are not wrong, but the gdbserial backend was designed to keep the regs field of Thread always updated. I did that because it was simpler, but reloading registers only when needed and memoizing the result is definitely the way forward.

However it's a rather complicated change so it should be done in a separate PR, rather than as part of an unrelated refactor. And either that goes first or this PR goes in without moving the thread resume logic out of the backends.

derekparker

comment created time in 8 days

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

type ProcessManipulation interface {

// BreakpointManipulation is an interface for managing breakpoints.
type BreakpointManipulation interface {
-	Breakpoints() *BreakpointMap
-	SetBreakpoint(addr uint64, kind BreakpointKind, cond ast.Expr) (*Breakpoint, error)
-	ClearBreakpoint(addr uint64) (*Breakpoint, error)
-	ClearInternalBreakpoints() error
+	WriteBreakpointFn(addr uint64) (string, int, *Function, []byte, error)

> I'm fine with that as long as we agree it's temporary

Yes, and separating public and private interfaces will also help us determine when we've arrived since the public interface will be empty.

derekparker

comment created time in 8 days

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

type Info interface {
	BinInfo() *BinaryInfo
	EntryPoint() (uint64, error)

+	AdjustsPCAfterBreakpoint() bool

Alright. I think you are right and I'm wrong on this point. However I think the objection to the naming scheme is still valid. Also, I'm wondering if there aren't better solutions than calling a method, for example a configuration parameter passed to NewTarget/PostInitializationSetup or, since it's only used by proc.Continue, a return value from ContinueOnce.

derekparker

comment created time in 8 days

issue comment aarzilli/golua

Support metatables for GoStruct

Sounds reasonable. Care to make a PR?

andreysm

comment created time in 8 days

push event aarzilli/delve

aarzilli

commit sha 23d4b910c0328eaa4968e42f7d0f0df9ef08d3bc

proc/native: partial support for Windows async preempt mechanism

See https://github.com/golang/go/issues/36494 for a description of why full support for 1.14 under windows is problematic.


aarzilli

commit sha b576068b6fb438f440d0ed9d7911f21cc0fc15dc

proc/native: disable Go 1.14 async preemption on Windows

See https://github.com/golang/go/issues/36494


push time in 9 days

Pull request review comment go-delve/delve

Go 1.14 support branch

func (t *Thread) singleStep() error {
		return err
	}

-	_, err = _ResumeThread(t.os.hThread)
-	if err != nil {
-		return err
+	suspendcnt := 0
+
+	for {

Done.

aarzilli

comment created time in 9 days

Pull request review comment go-delve/delve

Go 1.14 support branch

func FirstPCAfterPrologue(p Process, fn *Function, sameline bool) (uint64, error

	return pc, nil
}
+
+func setAsyncPreemptOff(p *Target, v int64) {
+	logger := p.BinInfo().logger
+	if producer := p.BinInfo().Producer(); producer == "" || !goversion.ProducerAfterOrEqual(producer, 1, 14) {
+		return
+	}
+	scope := globalScope(p.BinInfo(), p.BinInfo().Images[0], p.CurrentThread())
+	debugv, err := scope.findGlobal("runtime", "debug")
+	if err != nil || debugv.Unreadable != nil {
+		logger.Warnf("could not find runtime/debug variable (or unreadable): %v %v", err, debugv.Unreadable)
+		return
+	}
+	asyncpreemptoffv, err := debugv.structMember("asyncpreemptoff")
+	if err != nil {
+		logger.Warnf("could not find asyncpreemptoff field: %v", err)
+		return
+	}
+	asyncpreemptoffv.loadValue(loadFullValue)
+	if asyncpreemptoffv.Unreadable != nil {
+		logger.Warnf("asyncpreemptoff field unreadable: %v", asyncpreemptoffv.Unreadable)
+		return
+	}
+	p.asyncPreemptChanged = true
+	p.asyncPreemptOff, _ = constant.Int64Val(asyncpreemptoffv.Value)
+
+	err = scope.setValue(asyncpreemptoffv, newConstant(constant.MakeInt64(v), scope.Mem), "")
+	logger.Warnf("could not set asyncpreemptoff %v", err)
+}
+
+func RestoreAsyncPreempt(p *Target) {

Removed.

aarzilli

comment created time in 9 days

Pull request review comment go-delve/delve

Go 1.14 support branch

func (t *Target) Restart(from string) error {
	t.ClearAllGCache()
	return t.Process.Restart(from)
}
+
+func (t *Target) Detach(kill bool) error {
+	if !kill && t.asyncPreemptChanged {
+		setAsyncPreemptOff(t, t.asyncPreemptOff)

Removed.

aarzilli

comment created time in 9 days

pull request comment go-delve/delve

Delve 1.4.0 release notes [WIP]

Waiting for Go 1.14 support to be merged.

aarzilli

comment created time in 12 days

PR opened go-delve/delve

Delve 1.4.0 release notes [WIP]
all: bump version number and release notes

Thank you to: @stapelberg, @hengwu0, @tykcd996, @chainhelen,
@alexbrainman, @nd and @stigok.

+29 -1

0 comments

2 changed files

pr created time in 12 days

push event aarzilli/delve

aarzilli

commit sha f096ce68b2692b6a72ad446181ac500ca6cbf487

all: bump version number and release notes

Thank you to: @stapelberg, @hengwu0, @tykcd996, @chainhelen, @alexbrainman, @nd and @stigok.


push time in 12 days

create branch aarzilli/delve

branch : release1.4.0

created branch time in 12 days

pull request comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

As an alternative, most of the (actually) duplicated code between backends is the FindBreakpoint and SetCurrentBreakpoint methods. If we added a single method:

func (bpmap *BreakpointMap) EvalBreakpointForThread(th Thread, adjustPC bool) BreakpointState

then both FindBreakpoint and SetCurrentBreakpoint could be removed and calls to SetCurrentBreakpoint could be replaced with:

th.CurrentBreakpoint = p.Breakpoints().EvalBreakpointForThread(th, adjustPC)

This would remove the duplicate code, cause less disruption and leave the breakpoint state where it belongs.

derekparker

comment created time in 13 days

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

continueLoop:
	}

	if err := p.updateThreadList(&tu); err != nil {
-		return nil, err
+		return nil, nil, err
	}

	if p.BinInfo().GOOS == "linux" {
		if err := linutil.ElfUpdateSharedObjects(p); err != nil {
-			return nil, err
+			return nil, nil, err
		}
	}

-	if err := p.setCurrentBreakpoints(); err != nil {
-		return nil, err
-	}
-
-	for _, thread := range p.threads {
-		if thread.strID == threadID {
-			var err error
-			switch sig {
-			case 0x91:
-				err = errors.New("bad access")
-			case 0x92:
-				err = errors.New("bad instruction")
-			case 0x93:
-				err = errors.New("arithmetic exception")
-			case 0x94:
-				err = errors.New("emulation exception")
-			case 0x95:
-				err = errors.New("software exception")
-			case 0x96:
-				err = errors.New("breakpoint exception")
+	tid, err := strconv.ParseUint(threadID, 16, 32)
+	if thread, ok := p.threads[int(tid)]; err == nil && ok {
+		var err error
+		switch sig {
+		case 0x91:
+			err = errors.New("bad access")
+		case 0x92:
+			err = errors.New("bad instruction")
+		case 0x93:
+			err = errors.New("arithmetic exception")
+		case 0x94:
+			err = errors.New("emulation exception")
+		case 0x95:
+			err = errors.New("software exception")
+		case 0x96:
+			err = errors.New("breakpoint exception")
+		}
+		r := make([]proc.Thread, 0, len(p.threads))
+		for _, t := range p.threads {

I don't like that we are not using Thread.setbp here and instead just setting a breakpoint on every thread. I think the new logic should follow the old logic. Also the change to the loop looking for the thread doesn't have anything to do with this PR and I'd like to leave it out simply to minimize the patch size.

derekparker

comment created time in 13 days

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

package proc

+import (
+	"errors"
+	"go/ast"
+)
+
+// ErrDirChange is returned when trying to change execution direction
+// while there are still internal breakpoints set.
+var ErrDirChange = errors.New("direction change with internal breakpoints")
+
// Target represents the process being debugged.
type Target struct {
	Process

+	// Breakpoint table, holds information on breakpoints.
+	// Maps instruction address to Breakpoint struct.
+	breakpoints BreakpointMap
+
	// fncallForG stores a mapping of current active function calls.
	fncallForG map[int]*callInjection

	// gcache is a cache for Goroutines that we
	// have read and parsed from the targets memory.
	// This must be cleared whenever the target is resumed.
	gcache goroutineCache
+
+	// threadToBreakpoint maps threads to the breakpoint that they
+	// were trapped on.
+	threadToBreakpoint map[int]*BreakpointState

I don't like that a chunk of the thread state now goes into a map instead of being in its logical place (inside the thread object).

derekparker

comment created time in 13 days

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

func StepInstruction(dbp *Target) (err error) {
	if ok, err := dbp.Valid(); !ok {
		return err
	}
-	thread.Breakpoint().Clear()
-	err = thread.StepInstruction()
-	if err != nil {
-		return err
-	}
-	err = thread.SetCurrentBreakpoint(true)
-	if err != nil {
+
+	if err := dbp.threadStepInstruction(thread, true); err != nil {
		return err
	}
+
	if tg, _ := GetG(thread); tg != nil {
		dbp.SetSelectedGoroutine(tg)
	}
	return nil
}

+func (dbp *Target) threadStepInstruction(thread Thread, setbp bool) error {
+	bp := dbp.ThreadToBreakpoint(thread)
+	if bp.Breakpoint != nil {
+		if err := dbp.Process.ClearBreakpointFn(bp.Addr, bp.OriginalData); err != nil {
+			return err
+		}
+		defer dbp.Process.WriteBreakpointFn(bp.Addr)
+	}
+	if err := thread.StepInstruction(); err != nil {

The way this works out is far more inefficient in the gdbserial backend: the original code called the internal gdbserial.(*Thread).stepInstruction, which did not reload the thread registers (because we don't need them in this situation), while this calls StepInstruction, which will reload them.

derekparker

comment created time in 13 days

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

func OpenCore(corePath, exePath string, debugInfoDirs []string) (*proc.Target, e

// initialize for core files doesn't do much
// aside from call the post initialization setup.
func (p *Process) initialize(path string, debugInfoDirs []string) error {
-	return proc.PostInitializationSetup(p, path, debugInfoDirs, p.writeBreakpoint)
+	return proc.PostInitializationSetup(p, path, debugInfoDirs, p.WriteBreakpointFn)

Since WriteBreakpointFn is now part of the Process interface there is no reason to pass it here.

derekparker

comment created time in 13 days

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

type Info interface {
	BinInfo() *BinaryInfo
	EntryPoint() (uint64, error)

+	AdjustsPCAfterBreakpoint() bool

IMHO this is evidence that the code that this PR is unifying isn't actually common among different backends and shouldn't be unified.

I also don't think it's a good idea for a variable called adjustPC and a method called AdjustsPCAfterBreakpoint to have opposite meanings (AdjustsPCAfterBreakpoint returning true means we set adjustPC to false).

derekparker

comment created time in 13 days

Pull request review comment go-delve/delve

pkg/proc: Move breakpoint logic up to Target

type ProcessManipulation interface {

// BreakpointManipulation is an interface for managing breakpoints.
type BreakpointManipulation interface {
-	Breakpoints() *BreakpointMap
-	SetBreakpoint(addr uint64, kind BreakpointKind, cond ast.Expr) (*Breakpoint, error)
-	ClearBreakpoint(addr uint64) (*Breakpoint, error)
-	ClearInternalBreakpoints() error
+	WriteBreakpointFn(addr uint64) (string, int, *Function, []byte, error)

I'm going to insist that these internal functions are moved to an interface other than proc.Process that we don't embed inside Target. We had two "internal methods users of proc aren't supposed to call" before and now we have four.

derekparker

comment created time in 13 days

issue comment go-delve/delve

Cannot set breakpoint with go1.14beta1 on windows

Support for Go 1.14 hasn't been merged in yet. This particular problem was a bug in Go that is fixed if you build from master, but regardless Delve 1.3.0 will be seriously broken with Go 1.14.

nd

comment created time in 13 days

issue comment go-delve/delve

ARM support

It's because go get will retrieve the latest tagged version by default, and 1.3.x does not have ARM support. There is no need to look further into this: it will be resolved when we tag 1.4.0, which hopefully will be soon after Go 1.14 is released.

derekparker

comment created time in 14 days

create branch aarzilli/delve

branch : cond-bench-broken

created branch time in 19 days

issue comment go-delve/delve

Conditional breakpoints are slow

There's actually a paper describing that idea. There are two problems with that, the first one is that starting with Go 1.14 we can't inject code like that anymore, the second problem is that a 64bit absolute jump in amd64 takes up a massive 15 bytes so it won't fit over most instructions, which creates a lot of problems.

viktor-ferenczi

comment created time in 20 days

issue comment go-delve/delve

Conditional breakpoints are slow

A couple of considerations after looking into this a bit. The process of:

  1. stopping at a breakpoint
  2. evaluating a simple condition
  3. resuming the target process

currently takes us 2.2ms (all measures taken on my laptop running linux). This is consistent with the observations on this message (it would take around 30s to do that 10000 times). Using a toy debugger to do the same thing takes 0.08ms, however that's unrealistic because it doesn't have to evaluate an expression and doesn't deal with multiple threads properly. A more fair comparison is gdb which (experimentally) takes 0.18ms to do it, or a little bit over 10x faster than delve.

I have a series of patches that takes our latency down from 2.2ms to 0.6ms, I think there's probably still some room. Doing this for Go is harder than doing this for C so a goal of 0.22ms latency is probably realistic.

Note that the original goal, with 10M iterations, would still take 30 minutes even with gdb.

viktor-ferenczi

comment created time in 20 days

created tag aarzilli/gdlv

tag v1.2.0

GUI frontend for Delve

created time in 21 days

push event aarzilli/gdlv

a

commit sha c6ecd35a6e0673d8fe8911ac05313cac16c0fd6d

Version 1.2

view details

push time in 21 days

issue comment go-delve/delve

close "ALSR" to make debug convenient

Go binaries are not position independent by default so ASLR doesn't do anything for them. Goroutine stacks are allocated on the heap and I don't think there's a way to disable randomization of heap addresses (it would require deterministic scheduling of goroutines).

hengwu0

comment created time in 21 days

PR closed go-delve/delve

service: fix race condition on close
service: fix race condition on close

Fixes race condition between closing the listener and exiting that
causes an occasional panic on exit.

Fixes #1633

+6 -0

1 comment

1 changed file

aarzilli

pr closed time in 21 days

pull request comment go-delve/delve

service: fix race condition on close

I don't think we need a test speci

aarzilli

comment created time in 21 days

PR opened go-delve/delve

service: fix race condition on close
service: fix race condition on close

Fixes race condition between closing the listener and exiting that
causes an occasional panic on exit.

Fixes #1633

+6 -0

0 comments

1 changed file

pr created time in 21 days

create branch aarzilli/delve

branch : fix1633

created branch time in 21 days

pull request comment go-delve/delve

Fix serious performance degradation when building stack traces

This PR adds a benchmark for conditional breakpoints and fixes a serious performance degradation of stack traces introduced by commit adb1746c6019cd2ac40e37bb00d71647245268a2.

This is part of a larger series of optimizations that reduces the latency of a conditional breakpoint evaluation from 8ms to 0.9ms. The majority of the latency is caused by this particular bug (fixing it goes from 8ms down to 2.2ms).

aarzilli

comment created time in 22 days

PR opened go-delve/delve

Fix serious performance degradation when building stack traces
dwarf/reader: precalcStack does not need to read past the first entry

It was reading all the way to the end of the debug_info section,
slowing down stacktraces substantially.

Benchmark before:

BenchmarkConditionalBreakpoints-4   	       1	80344642562 ns/op

Benchmark after:

BenchmarkConditionalBreakpoints-4   	       1	22218288218 ns/op

i.e. a reduction of the cost of a breakpoint hit from 8ms to 2.2ms

Updates #1549
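The ns/op totals above convert to a per-hit cost as follows (a sketch; the figure of 10000 breakpoint hits per benchmark op is an assumption, consistent with the ~2.2ms per-hit cost discussed in #1549):

```go
package main

import (
	"fmt"
	"time"
)

// perHit converts a benchmark total (ns/op) into the cost of a single
// breakpoint hit, given how many hits one benchmark op performs.
func perHit(total time.Duration, hits int) time.Duration {
	return total / time.Duration(hits)
}

func main() {
	// ns/op totals from the benchmark output above.
	fmt.Println(perHit(80344642562, 10000)) // before: ≈ 8ms per hit
	fmt.Println(perHit(22218288218, 10000)) // after: ≈ 2.2ms per hit
}
```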

tests: add benchmark for conditional breakpoints

proc: remove CX method from proc.Registers

It is not used anymore besides internally by the proc/gdbserial
backend.

+36 -27

0 comment

9 changed files

pr created time in 22 days

create branch aarzilli/delve

branch : reggie0

created branch time in 22 days

push event aarzilli/delve

aarzilli

commit sha 1a57d4323f37611ba74eee468e0c679674729bb0

proc: cache result of GetG

Benchmark before:

BenchmarkConditionalBreakpoints-4   	       1	15929810602 ns/op

Benchmark after:

BenchmarkConditionalBreakpoints-4   	       1	11570508729 ns/op

Conditional breakpoint evaluation 1.6ms -> 1.2ms

Updates #1549


aarzilli

commit sha db6877a66ca081200954a350355feba0fc2bb17e

proc: optimize RegistersToDwarfRegisters

Benchmark before:

BenchmarkConditionalBreakpoints-4   	       1	11570508729 ns/op

Benchmark after:

BenchmarkConditionalBreakpoints-4   	       1	10013510647 ns/op

Conditional breakpoint evaluation 1.2ms -> 1.0ms

Updates #1549


aarzilli

commit sha da681c65d742ab2bfc16f0679cee997be1049dfd

proc: optimize parseG

runtime.g is a large and growing struct, and we only need a few fields. Instead of using loadValue to load the full contents of g, cache its memory and then only load the fields we care about.

Benchmark before:

BenchmarkConditionalBreakpoints-4   	       1	10013510647 ns/op

Benchmark after:

BenchmarkConditionalBreakpoints-4   	       1	9330025748 ns/op

Conditional breakpoint evaluation: 1.0ms -> 0.93ms

Updates #1549


push time in 22 days

push event aarzilli/delve

aarzilli

commit sha cb82a3f95e3d5015a71b93bc0208f6beabeef937

proc,proc/*: move SelectedGoroutine to proc.Target

Moves SelectedGoroutine, SwitchThread and SwitchGoroutine to proc.Target, merges PostInitializationSetup with NewTarget.


push time in 22 days
