
Re: [Qemu-devel] [PATCH v3] async: aio_context_new(): Handle event_notifier_init failure


From: Chrysostomos Nanakos
Subject: Re: [Qemu-devel] [PATCH v3] async: aio_context_new(): Handle event_notifier_init failure
Date: Wed, 17 Sep 2014 12:16:05 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:24.0) Gecko/20100101 Icedove/24.7.0

On 09/16/2014 10:40 PM, Eric Blake wrote:
> On 09/16/2014 12:04 PM, Chrysostomos Nanakos wrote:
>> If event_notifier_init fails QEMU exits without printing
>> any error information to the user. This commit adds an error
>> message on failure:
>>
>>   # qemu [...]
> Showing the actual command line you used would be helpful.

The problem arose on a system with a low open-files limit while QEMU was running with 8 iothread objects. Do you think we should add such a command line to the commit description? The problem is easy to reproduce with any combination of a low open-files limit and iothread objects, or even with a low open-files limit alone, without creating any iothreads.


>>   qemu: Failed to initialize event notifier: Too many open files in system
>>
>> Signed-off-by: Chrysostomos Nanakos <address@hidden>
>> ---
>>   async.c                  |   16 +++++++++++-----
>>   include/block/aio.h      |    2 +-
>>   include/qemu/main-loop.h |    2 +-
>>   iothread.c               |   11 ++++++++++-
>>   main-loop.c              |    9 +++++++--
>>   qemu-img.c               |    8 +++++++-
>>   qemu-io.c                |    7 ++++++-
>>   qemu-nbd.c               |    6 +++++-
>>   tests/test-aio.c         |   10 +++++++++-
>>   tests/test-thread-pool.c |   10 +++++++++-
>>   tests/test-throttle.c    |   10 +++++++++-
>>   vl.c                     |    5 +++--
>>   12 files changed, 78 insertions(+), 18 deletions(-)

>> -AioContext *aio_context_new(void)
>> +AioContext *aio_context_new(Error **errp)
>>   {
>> +    int ret;
>>       AioContext *ctx;
>>       ctx = (AioContext *) g_source_new(&aio_source_funcs, sizeof(AioContext));
>> +    ret = event_notifier_init(&ctx->notifier, false);
>> +    if (ret < 0) {
>> +        g_source_destroy(&ctx->source);
> Does g_source_destroy() guarantee that errno is unmolested? If not,
>
>> +        error_setg_errno(errp, -ret, "Failed to initialize event notifier");
> then this logs the wrong error. Swap the lines to be safe.

Good catch! I believe that Benoit has covered this point.
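For clarity, a minimal sketch of the swapped ordering (assuming, as the existing -ret usage implies, that event_notifier_init() returns a negative errno value on failure):

    ret = event_notifier_init(&ctx->notifier, false);
    if (ret < 0) {
        /* Set the error first, so nothing g_source_destroy() does can
         * interfere with the saved error value. */
        error_setg_errno(errp, -ret, "Failed to initialize event notifier");
        g_source_destroy(&ctx->source);
        return NULL;
    }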


>> +        return NULL;
>> +    }
>> +    aio_set_event_notifier(ctx, &ctx->notifier,
>> +                           (EventNotifierHandler *)
>> +                           event_notifier_test_and_clear);
>>       ctx->pollfds = g_array_new(FALSE, FALSE, sizeof(GPollFD));
>>       ctx->thread_pool = NULL;
>>       qemu_mutex_init(&ctx->bh_lock);
>>       rfifolock_init(&ctx->lock, aio_rfifolock_cb, ctx);
>> -    event_notifier_init(&ctx->notifier, false);
> Is hoisting the event notifier init to occur before the mutex init going
> to cause any grief?

No, I can't find any problem: none of the functions uses the mutex during initialization. The mutex is only used for adding and removing BHs to/from the BH list, so it is safe at the moment to initialize the mutex after the event notifier.
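(Purely as an illustration of that point, not a verbatim quote of async.c: bh_lock is only taken around BH list manipulation, roughly of the following shape, and none of that can run before aio_context_new() has returned, so doing qemu_mutex_init() after the event notifier is harmless.)

    /* sketch of a BH list insertion guarded by ctx->bh_lock */
    qemu_mutex_lock(&ctx->bh_lock);
    bh->next = ctx->first_bh;    /* link the new BH at the head of the list */
    ctx->first_bh = bh;
    qemu_mutex_unlock(&ctx->bh_lock);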


>> +++ b/main-loop.c
>> @@ -138,8 +139,12 @@ int qemu_init_main_loop(void)
>>           return ret;
>>       }
>> +    qemu_aio_context = aio_context_new(&local_error);
>> +    if (!qemu_aio_context) {
>> +        error_propagate(errp, local_error);
>> +        return -1;
>> +    }
> Can the earlier call to qemu_signal_init() consume any resources that
> are now leaked?

I can only see signal set manipulation and the setup of a signal handler. Nothing is leaked here.
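The vl.c hunk isn't quoted here, but just to illustrate how the propagated error would be consumed by a caller (variable name hypothetical, not taken from the patch):

    Error *main_loop_err = NULL;

    if (qemu_init_main_loop(&main_loop_err) < 0) {
        error_report("%s", error_get_pretty(main_loop_err));
        error_free(main_loop_err);
        exit(1);
    }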


>> @@ -205,10 +206,17 @@ static void test_cancel(void)
>>   int main(int argc, char **argv)
>>   {
>>       int ret;
>> +    Error *local_error;
>>       init_clocks();
>> -    ctx = aio_context_new();
>> +    ctx = aio_context_new(&local_error);
>> +    if (!ctx) {
>> +        error_report("Failed to create AIO Context: \'%s\'",
> Use of \' inside "" is unusual. (Multiple times in your patch)

I will fix that, and if everyone agrees I will resend the patch.
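For example, the test's error path would then read something like this (a sketch of the cleanup only, not the resent patch):

    ctx = aio_context_new(&local_error);
    if (!ctx) {
        error_report("Failed to create AIO Context: '%s'",
                     error_get_pretty(local_error));
        exit(1);
    }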

Regards,
Chrysostomos.


