From: Paolo Bonzini
Subject: Re: [PATCH 01/10] gdbstub: use preferred boolean option syntax
Date: Wed, 17 Feb 2021 17:38:20 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.7.0
On 16/02/21 20:10, Daniel P. Berrangé wrote:
> Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
> ---
>  gdbstub.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/gdbstub.c b/gdbstub.c
> index 759bb00bcf..3ee40479b6 100644
> --- a/gdbstub.c
> +++ b/gdbstub.c
> @@ -3505,7 +3505,7 @@ int gdbserver_start(const char *device)
>      if (strstart(device, "tcp:", NULL)) {
>          /* enforce required TCP attributes */
>          snprintf(gdbstub_device_name, sizeof(gdbstub_device_name),
> -                 "%s,nowait,nodelay,server", device);
> +                 "%s,wait=off,delay=off,server=on", device);
Uff, this is ugly...  The option should have been named nodelay (for TCP_NODELAY), but its sense was inverted to "delay" so that the "nodelay" shorthand would work.  Should we do something like
diff --git a/chardev/char-socket.c b/chardev/char-socket.c
index 9061981f6d..cb80af8d67 100644
--- a/chardev/char-socket.c
+++ b/chardev/char-socket.c
@@ -1469,8 +1469,8 @@ static void qemu_chr_parse_socket(QemuOpts *opts, ChardevBackend *backend,
     sock = backend->u.socket.data = g_new0(ChardevSocket, 1);
     qemu_chr_parse_common(opts, qapi_ChardevSocket_base(sock));
-    sock->has_nodelay = qemu_opt_get(opts, "delay");
-    sock->nodelay = !qemu_opt_get_bool(opts, "delay", true);
+    sock->has_nodelay = qemu_opt_get(opts, "delay") || qemu_opt_get(opts, "nodelay");
+    sock->nodelay = !qemu_opt_get_bool(opts, "delay", true) || qemu_opt_get_bool(opts, "nodelay", false);
     /*
      * We have different default to QMP for 'server', hence
      * we can't just check for existence of 'server'
      */

?

Paolo
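
To make the semantics of the proposed fallback concrete, here is a minimal stand-alone model of how the two assignments in the diff would combine a legacy "nodelay" flag with the "delay" option. This is not QEMU code: opt_get()/opt_get_bool() below are simplified stand-ins for qemu_opt_get()/qemu_opt_get_bool(), written just to show the boolean logic.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

struct opt { const char *name; const char *value; };

/* Return the raw value of an option, or NULL if it was not given. */
static const char *opt_get(const struct opt *opts, int n, const char *name)
{
    for (int i = 0; i < n; i++) {
        if (!strcmp(opts[i].name, name)) {
            return opts[i].value;
        }
    }
    return NULL;
}

/* Return the option parsed as a bool, or 'defval' if it was not given. */
static bool opt_get_bool(const struct opt *opts, int n, const char *name, bool defval)
{
    const char *v = opt_get(opts, n, name);
    if (!v) {
        return defval;
    }
    return !strcmp(v, "on") || !strcmp(v, "true");
}

int main(void)
{
    /* e.g. what the gdbstub string "...,delay=off,..." would produce */
    struct opt opts[] = { { "delay", "off" } };
    int n = 1;

    bool has_nodelay = opt_get(opts, n, "delay") || opt_get(opts, n, "nodelay");
    bool nodelay = !opt_get_bool(opts, n, "delay", true) ||
                   opt_get_bool(opts, n, "nodelay", false);

    printf("has_nodelay=%d nodelay=%d\n", has_nodelay, nodelay);
    return 0;
}

With { "delay", "off" } (what the gdbstub line generates today) this prints has_nodelay=1 nodelay=1, and a bare nodelay=on would give the same result, which is the point of accepting both spellings.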
>          device = gdbstub_device_name;
>      }
>  #ifndef _WIN32
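
For background on the TCP_NODELAY remark above, here is a minimal sketch (plain POSIX sockets, not QEMU code) of what nodelay / delay=off ultimately boils down to at the socket layer: disabling Nagle's algorithm so that small GDB remote-protocol packets are sent immediately instead of being batched.

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Disable Nagle's algorithm on an already-created TCP socket. */
int set_tcp_nodelay(int fd)
{
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}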