Blue Swirl wrote:
But the number is much higher: it's limited by the number of PCI devices.
On 1/11/09, Dor Laor <address@hidden> wrote:
Carl-Daniel Hailfinger wrote:
On 11.01.2009 08:10, Blue Swirl wrote:
On 1/11/09, Jamie Lokier <address@hidden> wrote:
> But we also have to think about how to support newer platforms and newer
> kernels and this will often mean that we have to make intrusive changes
> so that the integration makes everyone happy. This does not mean that
> we cannot support older platforms though, we just have to do it a little
> differently on the older platforms.
Sure, but don't make it _deliberately_ hard to support
older/obscure/can't-compile-a-kernel-module guests by
something that's obviously going to require a totally different
mechanism on those other guests. It would make it unnecessarily hard
to integrate diverse guests with management apps from different
authors if they do adopt the vmchannel mechanism.
I think a serial port device should be universally supported by any OS,
and it's portable to many systems. An older OS may accidentally enable
forwarding between the trusted nic and other nics; this doesn't happen
with serial lines.
I remember the old days of DOS networking where the Kirschbaum-Netz
software provided some sort of routed/forwarded networking between PCs
over serial ports. It was a default-on choice in many corporate and
private "LANs" in Germany at the beginning of the last decade.
Except for machines with that software (which is really hard to get
nowadays), my concern should be a non-issue, especially for virtual
machines which are unlikely to have such software installed.
Actually vmchannel started as a pv serial implementation. Standard serial
is a bit low-performing and demands a vmexit per byte (maybe it's not that bad
for this use case). Moreover, various guests do not support more than 4 serial
channels. Since there should be several channels and we'd like to preserve
some for console/debug, serial is too limiting.
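To see why one vmexit per byte hurts, here is a back-of-envelope estimate; the per-exit cost is an assumed ballpark figure, not a measurement:

```python
# Back-of-envelope ceiling for one-byte-per-vmexit serial I/O.
VMEXIT_COST_US = 10                        # assumed microseconds per exit
BYTES_PER_EXIT = 1                         # classic serial-port behaviour

ceiling_bps = BYTES_PER_EXIT * 1_000_000 // VMEXIT_COST_US
print(f"throughput ceiling: ~{ceiling_bps // 1000} KB/s")  # ~100 KB/s
```

At an assumed 10 microseconds per exit, the channel tops out around 100 KB/s, which is why a pv transport that batches bytes looks attractive.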
There could be similar OS limits on the number of nics in the system.
It's logical, mainly for the serial.
Originally, vmchannel was a virtio interface with a netlink interface to the
guest. Then Anthony asked to change it to a socket interface with a new
address family, which was indeed a logical step. Then David Miller was
reluctant to add such an interface to the kernel; instead, he offered the
network device solution.
Are we close to beginning this loop again? :)
I propose to make the vmchannel connect to any capable device (serial,
nic, usb, IO port, whatever) by adding some indirection. Moreover, at
start, no device should be "vmchannel-enabled", but the connection could
be made dynamically at the guest's request; then some of the disadvantages
you listed are gone.
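A minimal sketch of that indirection, purely illustrative (none of these names are actual QEMU APIs): backends register themselves, and a channel is only bound to a device when the guest asks for one.

```python
# Hypothetical registry decoupling vmchannel from any one device type.
class VMChannelRegistry:
    def __init__(self):
        self._backends = {}  # backend name -> factory returning a channel

    def register(self, name, factory):
        """Register a capable device type (serial, nic, usb, ioport, ...)."""
        self._backends[name] = factory

    def connect(self, name, **kwargs):
        """Bind a channel at the guest's request; nothing is enabled before."""
        if name not in self._backends:
            raise ValueError(f"no vmchannel backend: {name}")
        return self._backends[name](**kwargs)

registry = VMChannelRegistry()
registry.register("serial", lambda port=0: f"serial{port}-channel")
registry.register("nic", lambda addr="169.254.2.1": f"nic-channel@{addr}")

print(registry.connect("nic"))  # bound only now, at request time
```

The point of the sketch is only the shape: with the device choice behind a name, nothing is "vmchannel-enabled" until a guest request triggers `connect()`.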
My only fear is that too many options will confuse the users/developers.
The installer of the guest agent is responsible for punching a hole in
the firewall.
Let's try to stick to the nic solution. It has some advantages over pv
serial:
- Reliable communication if tcp is used
- Migration support for slirp
- No new driver in the guest.
- Might even work for older guests
The disadvantages are:
- Need to 'teach' guest daemons/firewalls not to handle/block the new nic
The guest could request a vmchannel only after ensuring that the
firewall is fixed.
It does check (meaning we need to fully implement the link-local RFC).
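The "reliable communication if tcp is used" advantage listed above is just ordinary stream-socket behaviour. A self-contained sketch of a guest-agent exchange follows; it uses loopback so it runs anywhere, whereas a real guest would dial the host's link-local address on the vmchannel nic, and the port is an assumption, not part of any published vmchannel spec:

```python
import socket
import threading

def echo_server(srv):
    # Stand-in for the host side of the channel: echo one message back.
    conn, _ = srv.accept()
    with conn:
        conn.sendall(conn.recv(64))

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # loopback stand-in for the vmchannel nic address
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

with socket.create_connection(srv.getsockname(), timeout=5) as ch:
    ch.sendall(b"guest-hello")  # guest agent -> host
    reply = ch.recv(64)         # TCP gives ordered, reliable delivery
print(reply.decode())
```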
- Link-local addresses for IPv4 are problematic when used on other
nics in parallel
Likewise, the guest could check the address situation beforehand.
The problem is that even if we check that no one is using this guest's
link-local address, another nic can use link-local addresses too. So a
remote host on the LAN of the other nic might choose the same address we
are using.
- We should either: 1. not use link-local addresses on other links,
2. use standard DHCP addresses, or 3. not use TCP/IP for vmchannel
communication.
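The clash condition itself is easy to detect mechanically. A small guest-side sketch, assuming the agent can enumerate the addresses of the *other* interfaces (helper names here are illustrative, not from any existing agent):

```python
import ipaddress

def is_ipv4_link_local(addr: str) -> bool:
    # RFC 3927 reserves 169.254.0.0/16 for IPv4 link-local use.
    return ipaddress.ip_address(addr).is_link_local

def conflict_possible(other_iface_addrs) -> bool:
    """True if another nic already carries a link-local address, so a peer
    on that LAN could claim the same address the vmchannel nic picked."""
    return any(is_ipv4_link_local(a) for a in other_iface_addrs)

print(conflict_possible(["192.168.1.5", "10.0.0.2"]))     # False
print(conflict_possible(["192.168.1.5", "169.254.7.9"]))  # True
```

A check like this would let the guest pick option 2 or 3 automatically when option 1 can't be guaranteed.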
So an additional nic can do the job, and we have several flavours to choose
from. The solution should be generic enough that any nic can be connected.