
From: Pradeep Jagadeesh
Subject: Re: [Qemu-devel] [PATCH v3] fsdev: add IO throttle support to fsdev devices
Date: Wed, 19 Oct 2016 10:57:14 +0200
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 Thunderbird/45.3.0

On 10/18/2016 6:19 PM, Alberto Garcia wrote:
On Mon 17 Oct 2016 11:19:11 AM CEST, Pradeep Jagadeesh wrote:

+typedef struct FsThrottle {
+    ThrottleState ts;
+    ThrottleTimers tt;
+    AioContext   *aioctx;
+    ThrottleConfig cfg;
+    bool enabled;
+    CoQueue      throttled_reqs[2];
+    unsigned     pending_reqs[2];
+    bool any_timer_armed[2];
+    QemuMutex lock;
+} FsThrottle;

You based your implementation on block/throttle-groups.c. I think
yours can be made simpler because one of the problems with mine is
that it needs to support multiple parallel I/O operations on the same
throttling group, and that's why the locking rules are more
complex. With a single user per ThrottleConfig this is not necessary.

Have you checked if you really need them? My impression is that you
might be able to get rid of the 'lock', 'any_timer_armed' and
'pending_reqs' fields.

Please check commit 76f4afb40fa076ed23fe0ab42c7a768ddb71123f to see
how the transition from single-drive throttling to group throttling
was done, specifically the bdrv_io_limits_intercept() function. You
will see that it was simpler.

I tried removing the lock, but I ran into RCU issues and QEMU hangs.
Once I put it back, everything works fine.

Did you figure out why the lock is needed in this case? In the case of
disk group throttling, it is because there is data shared by several
disks which may be running at the same time.

If I remove the locks and run the I/O workloads, I get the message "main-loop: WARNING: I/O thread spun for 1000 iterations" and then QEMU hangs. Even the monitor does not respond. I found that many people have faced similar issues, and there are some proposed fixes, but none of them worked for me.
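For what it's worth, the lockless single-consumer accounting that the earlier review comment describes could look roughly like the following generic token-bucket sketch. This is only an illustration of the idea, not QEMU's ThrottleState/ThrottleTimers API; the struct and function names here are made up for the example:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical single-user token bucket; with one consumer per config
 * there is no shared state, so no lock is needed. */
typedef struct SimpleThrottle {
    double  bytes_per_sec;  /* configured rate limit */
    double  level;          /* bytes consumed but not yet "paid for" */
    int64_t last_ns;        /* timestamp of the previous accounting call */
} SimpleThrottle;

/* Drain the bucket for the elapsed time, then charge a new request.
 * Returns the delay in nanoseconds the caller should wait before
 * issuing the I/O (0 if the request fits the current budget). */
static int64_t simple_throttle_account(SimpleThrottle *t,
                                       int64_t now_ns, double bytes)
{
    double elapsed = (now_ns - t->last_ns) / 1e9;

    t->level -= elapsed * t->bytes_per_sec;
    if (t->level < 0) {
        t->level = 0;
    }
    t->last_ns = now_ns;
    t->level += bytes;

    /* Allow roughly one second's worth of burst before delaying. */
    double over = t->level - t->bytes_per_sec;
    if (over <= 0) {
        return 0;
    }
    return (int64_t)(over / t->bytes_per_sec * 1e9);
}
```

A request under the limit returns a zero delay, a large burst returns a positive delay, and once enough virtual time has passed the bucket drains and requests flow again.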


