Re: [Qemu-devel] [RFC] contrib: add vhost-user-sim


From: Johannes Berg
Subject: Re: [Qemu-devel] [RFC] contrib: add vhost-user-sim
Date: Mon, 23 Sep 2019 11:33:41 +0200
User-agent: Evolution 3.30.5 (3.30.5-1.fc29)

On Mon, 2019-09-23 at 10:25 +0100, Stefan Hajnoczi wrote:
> 
> According to unix(7):
> 
>   The addrlen argument that describes the enclosing sockaddr_un
>   structure should have a value of at least:
> 
>     offsetof(struct sockaddr_un, sun_path)+strlen(addr.sun_path)+1
> 
>   or, more simply, addrlen can be specified as sizeof(struct sockaddr_un).
> 
> Please either increase len by 1 or use sizeof(struct sockaddr_un).

Good catch. I actually realized this later while porting it elsewhere,
but forgot to update this code.
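
FWIW, a minimal sketch of the fix I have in mind (the helper name and
the path handling here are placeholders, not the actual patch code):

#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

static int connect_un(int fd, const char *path)
{
    struct sockaddr_un un = { .sun_family = AF_UNIX };
    socklen_t len;

    /* leave room for the trailing NUL */
    if (strlen(path) + 1 > sizeof(un.sun_path)) {
        return -1;
    }
    strcpy(un.sun_path, path);

    /* either form satisfies unix(7); sizeof(un) is the simpler option */
    len = offsetof(struct sockaddr_un, sun_path) + strlen(path) + 1;

    return connect(fd, (struct sockaddr *)&un, len);
}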

> g_new0() cannot return NULL, please remove this if statement.

:-)

Shows I don't work with glib much ...
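
(For the record: g_new0() aborts the process on allocation failure
rather than returning NULL, so the usual pattern, with a made-up
struct name here, is just

    /* needs <glib.h>; no NULL check, g_new0() never returns NULL */
    struct conn *conn = g_new0(struct conn, 1);

without any error handling.)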

> > +static int full_read(int fd, void *_buf, size_t len)
> > +{
> > +    unsigned char *buf = _buf;
> > +    ssize_t ret;
> > +
> > +    do {
> > +        ret = read(fd, buf, len);
> > +        if (ret > 0) {
> > +            buf += ret;
> > +            len -= ret;
> > +        } else if (ret == 0) {
> > +            return 0;
> > +        } else {
> > +            return -errno;
> > +        }
> 
> Want to loop on EINTR?

Can we even get EINTR? We're not really expecting to deal with any
signals in this code, are we?

But I guess we can.
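
Something like this is what I'd end up with then (a sketch only; the
tail of the function isn't quoted above, so the success return value
here is an assumption):

#include <errno.h>
#include <stddef.h>
#include <unistd.h>

static int full_read(int fd, void *_buf, size_t len)
{
    unsigned char *buf = _buf;
    ssize_t ret;

    while (len > 0) {
        ret = read(fd, buf, len);
        if (ret > 0) {
            buf += ret;
            len -= ret;
        } else if (ret == 0) {
            /* EOF before the full buffer was read */
            return 0;
        } else if (errno == EINTR) {
            /* interrupted by a signal, just retry */
            continue;
        } else {
            return -errno;
        }
    }

    /* assumed success value; the quoted excerpt doesn't show it */
    return 1;
}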

> > +    switch (msg->op) {
> > +    case UM_TIMETRAVEL_REQUEST:
> > +        if (calendar_entry_remove(&conn->entry)) {
> > +            conn->entry.time = conn->offset + msg->time;
> > +            calendar_entry_add(&conn->entry);
> > +            DPRINT(" %d | calendar entry added for %lld\n", conn->idx, msg->time);
> > +        } else {
> > +            conn->entry.time = conn->offset + msg->time;
> > +            DPRINT(" %d | calendar entry time updated for %lld\n", conn->idx, msg->time);
> > +        }
> 
> Just checking the expected semantics:
> 
> If the entry was already added, then we update the time and it stays
> scheduled.  If the entry was not already added then we just stash away
> the time but don't schedule it?

Right.

The first case is the "normally running" case, where the request is
coming in after UM_TIMETRAVEL_WAIT but before we sent it
UM_TIMETRAVEL_RUN, i.e. when the sender of the message got an interrupt
from elsewhere and needs to change the next runtime.

The second case is when it sends the request before it goes into
UM_TIMETRAVEL_WAIT, in which case we just want the entry to be added to
the calendar when we actually get _WAIT.

> Also, are the DPRINT() messages swapped in the if ... else ... bodies?
> They seem to be talking about the other case.

Or, rather, they should be rephrased, I guess :)
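
Maybe something along these lines would be clearer (just a suggestion
for the wording, same logic as in the patch):

    case UM_TIMETRAVEL_REQUEST:
        if (calendar_entry_remove(&conn->entry)) {
            /* already scheduled: update the time and re-add */
            conn->entry.time = conn->offset + msg->time;
            calendar_entry_add(&conn->entry);
            DPRINT(" %d | calendar entry rescheduled for %lld\n",
                   conn->idx, msg->time);
        } else {
            /* not scheduled yet: stash the time until we get _WAIT */
            conn->entry.time = conn->offset + msg->time;
            DPRINT(" %d | request time %lld stashed until _WAIT\n",
                   conn->idx, msg->time);
        }
        break;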

> > diff --git a/include/standard-headers/linux/um_timetravel.h b/include/standard-headers/linux/um_timetravel.h
> > new file mode 100644
> > index 000000000000..3aaced426a92
> > --- /dev/null
> > +++ b/include/standard-headers/linux/um_timetravel.h
> > @@ -0,0 +1,107 @@
> 
> Please use scripts/update-linux-headers.sh to import this header file
> with the necessary conversions (e.g. #include <linux/types.h> ->
> #include "standard-headers/linux/types.h", __u64 -> uint64_t).

Hah, sure, wasn't aware of that.
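
(If I read it right, the script gets pointed at a kernel tree and
rewrites the UAPI types on import, roughly like this; illustrative
only, not the literal header contents:

    /* was: #include <linux/types.h> */
    #include "standard-headers/linux/types.h"

    struct um_timetravel_msg_example {
        /* was: __u64 time; */
        uint64_t time;
    };

)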


Note, I'm not happy with this code at all; it deadlocks all the time.
Unfortunately, I haven't really been able to figure out how to make glib
do what I wanted.

The first issue is that sometimes glib actually seems to report an FD
as readable when it's not, so I even put the FDs into non-blocking mode :(

The second is that I can't seem to figure out how to do recursive
mainloops.

To really do this *well*, it should remain a single-threaded
application, but it would need a hook like

run_mainloop_until_fd_readable(vdev->call_fd)

inside the libvhost-user.c code, and that's a bit ugly ... if I could
even figure out how to implement that in glib.
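
Roughly the shape I'd imagine, using a nested GMainLoop and
g_unix_fd_add(), though I haven't gotten something like this to behave
yet (untested sketch, not wired into libvhost-user):

#include <glib.h>
#include <glib-unix.h>

static gboolean fd_readable_cb(gint fd, GIOCondition cond, gpointer data)
{
    g_main_loop_quit(data);
    return G_SOURCE_REMOVE;
}

/* recursively run the default context until fd becomes readable */
static void run_mainloop_until_fd_readable(int fd)
{
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);

    g_unix_fd_add(fd, G_IO_IN, fd_readable_cb, loop);
    /* all other sources keep getting dispatched while we block here */
    g_main_loop_run(loop);
    g_main_loop_unref(loop);
}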

johannes



