Re: [Qemu-devel] Re: [PATCH 3/3] vnc: threaded VNC server
From: Corentin Chary
Subject: Re: [Qemu-devel] Re: [PATCH 3/3] vnc: threaded VNC server
Date: Thu, 3 Jun 2010 10:26:12 +0200
On Thu, Jun 3, 2010 at 9:55 AM, Paolo Bonzini <address@hidden> wrote:
> On 05/29/2010 09:38 AM, Corentin Chary wrote:
>>
>> Implement a threaded VNC server using the producer-consumer model.
>> The main thread pushes encoding jobs (a list of rectangles to update)
>> onto a queue, and the VNC worker thread consumes that queue and sends
>> framebuffer updates to the output buffer.
>>
>> There are three levels of locking:
>> - jobs queue lock: for each operation on the queue (push, pop, isEmpty?)
>> - VncState global lock: mainly used for framebuffer updates, to avoid
>>   screen corruption if the framebuffer is updated while the worker
>>   thread is doing something.
>> - VncState::output lock: used to make sure the output buffer is not
>>   corrupted if two threads try to write to it at the same time.
>>
>> While the VNC worker thread is working, the VncState global lock is held
>> to avoid screen corruption (this blocks vnc_refresh() for a short time),
>> but the output lock is not held because the thread works on its own
>> output buffer. When the encoding job is done, the worker thread takes
>> the output lock and copies its output buffer into vs->output.
>
> This belongs in a comment in the code, not in the commit message (or in
> both).
Right
>> +void vnc_job_push(VncJob *job)
>> +{
>> +    vnc_lock_queue(queue);
>> +    if (QLIST_EMPTY(&job->rectangles)) {
>> +        qemu_free(job);
>
> No need to lock if you get into the "then" block.
I locked it because the main thread can try to push a job while a
consumer is removing one, so I can't call QLIST_EMPTY() without
locking the queue.
>> +    } else {
>> +        QTAILQ_INSERT_TAIL(&queue->jobs, job, next);
>> +        qemu_cond_broadcast(&queue->cond);
>> +    }
>> +    vnc_unlock_queue(queue);
>> +}
>
> ...
>
>> +static int vnc_worker_thread_loop(VncJobQueue *queue)
>> +{
>> +    VncJob *job;
>> +    VncRectEntry *entry, *tmp;
>> +    VncState vs;
>> +    int n_rectangles;
>> +    int saved_offset;
>> +
>> +    vnc_lock_queue(queue);
>> +    if (QTAILQ_EMPTY(&queue->jobs)) {
>> +        qemu_cond_wait(&queue->cond, &queue->mutex);
>> +    }
>> +
>> +    /* If the queue is empty, it's an exit order */
>> +    if (QTAILQ_EMPTY(&queue->jobs)) {
>> +        vnc_unlock_queue(queue);
>> +        return -1;
>> +    }
>
> This is not safe. It might work with a single consumer, but something like
> this is better:
>
>     vnc_lock_queue(queue);
>     while (!queue->exit && QTAILQ_EMPTY(&queue->jobs)) {
>         qemu_cond_wait(&queue->cond, &queue->mutex);
>     }
>     if (queue->exit) {
>         vnc_unlock_queue(queue);
>         return -1;
>     }
Right.
> (It occurred to me now that maybe you can reuse ->aborting. Not sure
> though).
>
>> +    qemu_mutex_unlock(&job->vs->output_mutex);
>> +
>> +    if (job->vs->csock != -1 && job->vs->abording != true) {
>> +        vnc_flush(job->vs);
>> +    }
>> +
>
> You're accessing the abort flag outside the mutex here. Also, you are not
> using vnc_{,un}lock_output.
I assumed that bool (int) writes were atomic... but you're right, I should lock that.
>> +    job = QTAILQ_FIRST(&queue->jobs);
>> +    vnc_unlock_queue(queue);
>
> ...
>
>> +static void vnc_abord_display_jobs(VncDisplay *vd)
>> +{
>> +    VncState *vs;
>> +
>> +    QTAILQ_FOREACH(vs, &vd->clients, next) {
>> +        vnc_lock_output(vs);
>> +        vs->abording = true;
>> +        vnc_unlock_output(vs);
>> +    }
>> +    QTAILQ_FOREACH(vs, &vd->clients, next) {
>> +        vnc_jobs_join(vs);
>> +    }
>> +    QTAILQ_FOREACH(vs, &vd->clients, next) {
>> +        vnc_lock_output(vs);
>> +        vs->abording = false;
>> +        vnc_unlock_output(vs);
>> +    }
>> +}
>
> It's "abort" not "abord". :-)
Ooops ...
> ...
>
>> static void vnc_disconnect_finish(VncState *vs)
>> {
>> +    vnc_jobs_join(vs); /* Wait encoding jobs */
>> +    vnc_lock(vs);
>
> Possibly racy? Maybe you have to set the aforementioned new flag
> queue->exit at the beginning of vnc_jobs_join, and refuse new jobs if it is
> set.
>
> Also, if anything waits on the same vs in vnc_refresh while you own it in
> vnc_disconnect_finish, as soon as you unlock they'll have a dangling
> pointer. (After you unlock the mutex the OS wakes the thread, but then
> pthread_mutex_lock has to check again that no one got the lock in the
> meanwhile; so QTAILQ_FOREACH_SAFE is not protecting you). Probably it's
> better to use a single lock on vd->clients instead of one lock per VncState.
vnc_disconnect_finish() can only be called by the main thread, so I don't
see how this could be racy. Any hint?
Am I missing something?
>> +void vnc_client_write(void *opaque)
>> +{
>> +    VncState *vs = opaque;
>> +
>> +    vnc_lock_output(vs);
>> +    if (vs->output.offset) {
>> +        vnc_client_write_locked(opaque);
>> +    } else {
>> +        qemu_set_fd_handler2(vs->csock, NULL, vnc_client_read, NULL, vs);
>> +    }
>
> Why the if? The "else" branch is already done by vnc_client_write_plain.
This is because the vnc_write fd handler can be set by the thread, which
can end up calling vnc_client_write_plain with vs->output.offset == 0 and
disconnecting the client.
> This may be a good time to port qemu-threads to Windows too. IO thread has
> no hope to work under Windows at least without major hacks (because Windows
> has no asynchronous interrupts; the only way I can imagine to emulate them
> is a breakpoint) but threaded VNC should work.
Right, but I won't do that since I don't have Windows :).
--
Corentin Chary
http://xf.iksaif.net