
Re: [PATCH v11 19/19] multi-process: add configure and usage information


From: Philippe Mathieu-Daudé
Subject: Re: [PATCH v11 19/19] multi-process: add configure and usage information
Date: Wed, 4 Nov 2020 19:39:29 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.3.1

Hi Jagannathan,

On 10/15/20 8:05 PM, Jagannathan Raman wrote:
> From: Elena Ufimtseva <elena.ufimtseva@oracle.com>

Documentation is scarce ;)

> 
> Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
> Signed-off-by: Jagannathan Raman <jag.raman@oracle.com>
> Signed-off-by: John G Johnson <john.g.johnson@oracle.com>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  MAINTAINERS                |  2 ++
>  docs/multi-process.rst     | 67 ++++++++++++++++++++++++++++++++++++++++++++++
>  scripts/mpqemu-launcher.py | 49 +++++++++++++++++++++++++++++++++
>  3 files changed, 118 insertions(+)
>  create mode 100644 docs/multi-process.rst
>  create mode 100755 scripts/mpqemu-launcher.py
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 9a911e0..d12aba7 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -3118,6 +3118,8 @@ F: include/hw/pci/memory-sync.h
>  F: hw/i386/remote-iohub.c
>  F: include/hw/i386/remote-iohub.h
>  F: docs/devel/multi-process.rst
> +F: scripts/mpqemu-launcher.py
> +F: scripts/mpqemu-launcher-perf-mode.py

This one was in v7, Stefan asked about it, then the script
disappeared in v8 =)
https://www.mail-archive.com/qemu-devel@nongnu.org/msg718984.html

>  
>  Build and test automation
>  -------------------------
> diff --git a/docs/multi-process.rst b/docs/multi-process.rst
> new file mode 100644
> index 0000000..c4b022c
> --- /dev/null
> +++ b/docs/multi-process.rst
> @@ -0,0 +1,67 @@
> +Multi-process QEMU
> +==================
> +
> +This document describes how to configure and use multi-process QEMU.
> +For the design document, refer to docs/devel/multi-process.rst.
> +
> +1) Configuration
> +----------------
> +
> +To enable support for multi-process, add --enable-mpqemu
> +to the list of options for the "configure" script.
> +
> +
> +2) Usage
> +--------
> +
> +Multi-process QEMU requires an orchestrator to launch it. Please refer to
> +the light-weight Python-based orchestrator for mpqemu in
> +scripts/mpqemu-launcher.py to launch QEMU in multi-process mode.
> +
> +The following is a description of the command line used to launch mpqemu.
> +
> +* Orchestrator:
> +
> +  - The orchestrator creates a Unix socket pair.
> +
> +  - It launches the remote process and passes one of the
> +    sockets to it via the command line.
> +
> +  - It then launches QEMU and specifies the other socket as an option
> +    to the Proxy device object.
> +
> +* Remote Process:
> +
> +  - QEMU can enter remote process mode by using the "x-remote" machine
> +    option.
> +
> +  - The orchestrator creates an "x-remote-object" with details about
> +    the device and the file descriptor for the device.
> +
> +  - The remaining options are no different from how one launches QEMU with
> +    devices.
> +
> +  - Example command-line for the remote process is as follows:
> +
> +      /usr/bin/qemu-system-x86_64                                        \
> +      -machine x-remote                                                  \
> +      -device lsi53c895a,id=lsi0                                         \
> +      -drive id=drive_image2,file=/build/ol7-nvme-test-1.qcow2           \
> +      -device scsi-hd,id=drive2,drive=drive_image2,bus=lsi0.0,scsi-id=0  \
> +      -object x-remote-object,id=robj1,devid=lsi0,fd=4
> +
> +* QEMU:
> +
> +  - Since parts of the RAM are shared between QEMU and the remote process, a
> +    memory-backend-memfd is required to facilitate this, as follows:
> +
> +    -object memory-backend-memfd,id=mem,size=2G
> +
> +  - A "x-pci-proxy-dev" device is created for each of the PCI devices 
> emulated
> +    in the remote process. A "socket" sub-option specifies the other end of
> +    unix channel created by orchestrator. The "id" sub-option must be 
> specified
> +    and should be the same as the "id" specified for the remote PCI device
> +
> +  - Example command-line for QEMU is as follows:
> +
> +      -device x-pci-proxy-dev,id=lsi0,socket=3
> diff --git a/scripts/mpqemu-launcher.py b/scripts/mpqemu-launcher.py
> new file mode 100755
> index 0000000..6e0ef22
> --- /dev/null
> +++ b/scripts/mpqemu-launcher.py
> @@ -0,0 +1,49 @@
> +#!/usr/bin/env python3
> +import socket
> +import os
> +import subprocess
> +import time
> +
> +PROC_QEMU='/usr/bin/qemu-system-x86_64'

If this is a (multiarch) test, then ...

> +
> +proxy, remote = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
> +
> +remote_cmd = [ PROC_QEMU,                                                       \
> +               '-machine', 'x-remote',                                          \
> +               '-device', 'lsi53c895a,id=lsi1',                                 \

... I'd move it to tests/integration/multiproc-x86-lsi53c895a.py ...

> +               '-drive', 'id=drive_image1,file=/build/ol7-nvme-test-1.qcow2',  \

... use avocado.utils.vmimage (see tests/acceptance/boot_linux.py)
to download a prebuilt image, ...

> +               '-device', 'scsi-hd,id=drive1,drive=drive_image1,bus=lsi1.0,'   \
> +                              'scsi-id=0',                                      \
> +               '-object',                                                       \
> +               'x-remote-object,id=robj1,devid=lsi1,fd='+str(remote.fileno()), \
> +               '-nographic',                                                    \
> +             ]
> +
> +proxy_cmd = [ PROC_QEMU,                                                        \
> +              '-name', 'OL7.4',                                                 \
> +              '-machine', 'q35,accel=kvm',                                      \
> +              '-smp', 'sockets=1,cores=1,threads=1',                            \
> +              '-m', '2048',                                                     \
> +              '-object', 'memory-backend-memfd,id=sysmem-file,size=2G',         \
> +              '-numa', 'node,memdev=sysmem-file',                               \
> +              '-device', 'virtio-scsi-pci,id=virtio_scsi_pci0',                 \
> +              '-drive', 'id=drive_image1,if=none,format=qcow2,'                 \
> +                            'file=/home/ol7-hdd-1.qcow2',                       \
> +              '-device', 'scsi-hd,id=image1,drive=drive_image1,'                \
> +                             'bus=virtio_scsi_pci0.0',                          \
> +              '-boot', 'd',                                                     \
> +              '-vnc', ':0',                                                     \
> +              '-device', 'x-pci-proxy-dev,id=lsi1,fd='+str(proxy.fileno()),     \
> +            ]
> +
> +
> +pid = os.fork()
> +
> +if pid:
> +    # In Proxy
> +    print('Launching QEMU with Proxy object')
> +    process = subprocess.Popen(proxy_cmd, pass_fds=[proxy.fileno()])
> +else:
> +    # In remote
> +    print('Launching Remote process')
> +    process = subprocess.Popen(remote_cmd, pass_fds=[remote.fileno()])
> 

... and do something within the guest to be sure MultiProc works :)
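
For example, roughly along these lines (a totally untested sketch; the
Fedora image parameters, the console pattern, the final in-guest check and
whether the QEMUMachine helper keeps inherited fds open are all assumptions
of mine, not something this series provides):

  # tests/integration/multiproc-x86-lsi53c895a.py (path suggested above)
  import os
  import socket

  from avocado_qemu import Test
  from avocado_qemu import wait_for_console_pattern
  from avocado.utils import vmimage


  class MultiProcLsi(Test):
      """
      :avocado: tags=arch:x86_64
      """
      def test_lsi53c895a(self):
          # Download a prebuilt guest image instead of hardcoding
          # /build/ol7-nvme-test-1.qcow2 (distro/version are placeholders)
          image = vmimage.get('fedora', arch='x86_64', version='31',
                              cache_dir=self.cache_dirs[0],
                              snapshot_dir=self.workdir)

          proxy_sock, remote_sock = socket.socketpair(socket.AF_UNIX,
                                                      socket.SOCK_STREAM)
          os.set_inheritable(proxy_sock.fileno(), True)
          os.set_inheritable(remote_sock.fileno(), True)

          # Remote process: owns the LSI controller and the disk
          remote_vm = self.get_vm()
          remote_vm.add_args('-machine', 'x-remote')
          remote_vm.add_args('-device', 'lsi53c895a,id=lsi1')
          remote_vm.add_args('-drive', 'id=drive_image1,file=' + image.path)
          remote_vm.add_args('-device',
                             'scsi-hd,id=drive1,drive=drive_image1,'
                             'bus=lsi1.0,scsi-id=0')
          remote_vm.add_args('-object',
                             'x-remote-object,id=robj1,devid=lsi1,fd=%d'
                             % remote_sock.fileno())
          remote_vm.launch()

          # Proxy side: regular QEMU with shared memory and the proxy device
          self.vm.set_console()
          self.vm.add_args('-machine', 'q35')
          self.vm.add_args('-m', '2048')
          self.vm.add_args('-object',
                           'memory-backend-memfd,id=sysmem-file,size=2G')
          self.vm.add_args('-numa', 'node,memdev=sysmem-file')
          self.vm.add_args('-device',
                           'x-pci-proxy-dev,id=lsi1,fd=%d'
                           % proxy_sock.fileno())
          self.vm.launch()

          # Placeholder liveness check only; see note below.
          wait_for_console_pattern(self, 'Kernel command line')

The last wait_for_console_pattern() is only a placeholder; logging into the
guest (as tests/acceptance/boot_linux.py does) and reading from the disk
behind the proxy device would be the real "MultiProc works" check.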

Regards,

Phil.



