
Re: GSoC project about virtualization using Hurd mechanisms

From: olafBuddenhagen
Subject: Re: GSoC project about virtualization using Hurd mechanisms
Date: Thu, 10 Apr 2008 09:57:35 +0200
User-agent: Mutt/1.5.17+20080114 (2008-01-14)


On Wed, Apr 09, 2008 at 12:49:28AM +0200, zhengda wrote:

> I read the code of pfinet, and found that the implementation of the
> TCP/IP stack is almost the same as Linux's.

Indeed, it's the TCP/IP code from some older Linux version (2.0 I
think), only converted to run as a Hurd translator :-)

> pfinet uses libtrivfs to build the translator,

Right, libtrivfs is used for the basic setup, and to make the node look
like a file. The interesting part is the socket interface though, which
is not implemented by libtrivfs itself...

But well, that's not the topic at hand :-)

> A packet received by the driver is dispatched by the packet filter,
> which determines the destination of every copy of the packet. So I
> guess that as long as the right packet filter is set, the kernel will
> send the packet to pfinet.


> But, unfortunately, I cannot find any code in hurd which tries to set
> the packet filter.

It must be there somewhere in pfinet. From the followup mail I conclude
that you found it in the meantime?...

On Wed, Apr 09, 2008 at 11:45:15PM +0200, zhengda wrote:

> In the Hurd, pfinet registers a filter when it opens the ethernet
> device (this ethernet device can be thought of as a stub, right?), so
> it can receive the packets it wants from the network.

Sorry, no idea what you mean by "stub" here...

> pfinet calls device_write() to send the packet to the device. gnumach
> gets the packet from device_write() and sends it to the network;
> meanwhile, it sends one copy to the packet filter.

Yes, a patch to improve the packet filter code was included not so long
ago; IIRC filtering outgoing packets as well was one of the
improvements.

> If we set the filter well, I believe pfinet can get the copy. (But I
> haven't gone through the code of the packet filter, so I'm not sure
> about it.)

Sounds plausible :-)

> If it works, does it mean pfinet servers have already been able to
> communicate with each other?

Well, it doesn't work for me: I get "destination host unreachable"...

I suggest you try it yourself: Create a subhurd, set up pfinet in it
with its own IP address, verify that you can talk to both the main
system and the subhurd by using the respective IPs (you should be able
to open an ssh connection to either one, for example), and finally test
whether you can talk from one to the other...
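Concretely, the subhurd's pfinet could be attached with something like the following (addresses, netmask, and interface name are placeholders; adjust them to your network):

```shell
# Inside the subhurd: attach pfinet to the inet socket node with its
# own IP address.
settrans -fgap /servers/socket/2 /hurd/pfinet \
    --interface=eth0 --address=192.168.1.3 \
    --gateway=192.168.1.1 --netmask=255.255.255.0

# Then, from the main Hurd (say it is 192.168.1.2), try to reach the
# subhurd's stack:
ping 192.168.1.3
ssh 192.168.1.3
```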

Perhaps it's necessary to do some additional setup in pfinet to make
sure the packet filter properly handles the outgoing packets?

If it could be made to work, it would certainly be the easiest solution,
though I'm not sure whether it would be the most elegant...

> If it doesn't work, I have another proposal: We reimplement the
> functions in ethernet.c so that they send the packets to other pfinets.

You mean the pfinets would directly talk to each other? Well, I guess
that should be doable, though somehow it doesn't sound right... Could
you try to examine possible advantages and disadvantages of such a
setup?

> One solution is that every pfinet sends its packets to one process
> which decides the destination for every packet. In this case, the
> extra process just works like a virtual network driver.

Indeed, one possible approach I was vaguely contemplating was to have
such a server sitting between the kernel device and the network stacks,
and do routing between them -- serving as a hypervisor basically.

The interesting question is how to configure the routing. Having an
extra interface for that would be awkward, so probably it should mirror
the kernel packet filter somehow...

> The benefit of the solution is that we can do many things in the extra
> process to simulate the network.

Ah, so you don't want to touch the actual hardware device at all, but
rather create a purely virtual network? Well, that's certainly an
option, and maybe actually more useful for what you want to do -- but
less useful in general I believe... :-)

