
Re: [VirtIO] Support for various devices in Xen



On Fri, Apr 12, 2024 at 1:23 AM Stefano Stabellini
<sstabellini@xxxxxxxxxx> wrote:
>
> -Vikram +Edgar
>
> On Thu, 11 Apr 2024, Andrei Cherechesu wrote:
> > Hi Stefano, Vikram, Viresh,
> >
> > Thank you for your answers and support, and sorry for my late reply.
> >
> >
> > On 12/01/2024 02:56, Vikram Garhwal wrote:
> > > Hi Andrei & Stefano,
> > >
> > > Actually, QEMU patches are already upstreamed for virtio-blk and 
> > > virtio-net
> > > devices available in v8.2.0.
> > > For virtio with grants, the patches are WiP.
> > >
> > > On Xen side, we are yet to upstream xen-tools patches which basically 
> > > generate
> > > the right arguments when invoking QEMU.
> > > Here are the downstream patches if you want them:
> > > 1. https://github.com/Xilinx/xen/commit/be35b46e907c7c78fd23888d837475eb28334638
> > > 2. For the virtio disk backend:
> > >    https://github.com/Xilinx/xen/commit/947280803294bbb963f428423f679d074c60d632
> > > 3. For virtio-net:
> > >    https://github.com/Xilinx/xen/commit/32fcc702718591270e5c8928b7687d853249c882
> > > 4. For changing the machine name to xenpvh (to align with QEMU changes):
> > >    https://github.com/Xilinx/xen/commit/5f669949c9ffdb1947cb47038956b5fb8eeb072a
> > >> The libxl changes are lagging behind a bit and you might have to use
> > >> device_model_args to enable virtio backends in QEMU.
> > > But QEMU 8.2.0 can still be used for virtio-net on ARM.
> > >
> > > @Andrei here is an example on how to use virtio-net with QEMU:
> > >     -device virtio-net-device,id=nic0,netdev=net0,mac=00:16:3e:4f:43:05 \
> > >     -netdev type=tap,id=net0,ifname=vif1.0-emu,br=xenbr0,script=no,downscript=no \
> > >     -machine xenpvh
> > >
> > > Please make sure to use xenpvh as QEMU machine.
> >
> > I've managed to successfully get a DomU up and running with the rootfs
> > based on virtio-blk. I'm running QEMU 8.2.1, Xen 4.18 + Vikram's downstream
> > patches, and Linux 6.6.12-rt, built through Yocto with some changes to the
> > xen-tools and QEMU recipes.
> >
> > However, when also enabling PV networking through virtio-net, the DomU
> > cannot boot successfully. The device model args passed by xen-tools when
> > invoking QEMU look exactly like what Vikram said they should.
> >
> > While executing `xl -v create ..` I can see an error regarding the device
> > model crashing:
> >
> >         libxl: debug: libxl_exec.c:127:libxl_report_child_exitstatus: 
> > domain 1 device model (dying as expected) [300] died due to fatal signal 
> > Killed
> >
> > But the error is not fatal and the DomU spawn goes on; it boots but never
> > reaches a prompt. It seems that the kernel crashes silently at some point.
> > The networking interface is present, though, since udev tries to rename it
> > right before the boot hangs:
> >
> >         [    4.376715] vif vif-0 enX0: renamed from eth1
> >
> > Why would the QEMU DM process be killed, though? Invalid memory access?
> >
> > Here are the full logs for the "xl create" command [0] and for DomU's dmesg 
> > [1].
> > Any ideas as to why that might happen, some debugging insights, or maybe 
> > some configuration details I could have overlooked?
> >
> > Thank you very much for your help once again.

Hi Andrei,

I'll share some info about my setup:
I'm using:

Xen upstream/master + virtio patches that Vikram shared
Commit 63f66058b5 on this repo/branch:
https://github.com/edgarigl/xen/tree/edgar/virtio-base

QEMU 02e16ab9f4 upstream/master
Linux 09e5c48fea17 upstream/master (from March)
Yocto rootfs.

I had a look at your logs but I can't tell why it's failing on your side.
I've not tried using virtio-blk as a rootfs on my side; perhaps that's
related. It would be useful to see a diff of your Xen tree compared to
plain 4.18, or whatever base you've got.
You probably don't have
https://github.com/edgarigl/xen/commit/63f66058b508180107963ea37217bc88d813df8f
but if that were the problem, I'd have thought virtio wouldn't work at
all. It could also be related to your kernel.
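On the "why was the device model Killed?" question: on Linux an
unexpected SIGKILL is often the kernel OOM killer rather than a QEMU
bug, so one quick (speculative) thing to check right after the device
model dies is dom0's kernel log:

```shell
# Speculative check: if the QEMU device model received SIGKILL, see
# whether dom0's OOM killer was responsible before digging deeper.
dmesg | grep -iE 'out of memory|oom|killed process'
```

If nothing shows up there, the kill came from somewhere else (e.g. a
watchdog or libxl itself tearing the domain down).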

My guest config looks like this:
name = "g0"
memory = 1024
vcpus = 1
kernel = "Image"
ramdisk = "core-image-minimal-qemuarm64.rootfs.cpio.gz"
extra = "root=/dev/ram0 console=ttyAMA0"
vif = [ 'model=virtio-net,type=ioemu,bridge=xenbr0' ]
disk = [ '/etc/xen/file.img,,xvda,backendtype=qdisk,specification=virtio' ]
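If your tree lacks the libxl plumbing for the vif/disk virtio options
above, Stefano mentioned earlier that device_model_args can be used to
enable the backends directly. An untested sketch of that approach (the
arguments simply mirror the QEMU command line xl generates; the MAC,
bridge, and image path are placeholders from my setup):

```
# Untested sketch: pass the virtio backends straight to QEMU via
# device_model_args instead of the vif/disk virtio options.
device_model_args = [
    '-global', 'virtio-mmio.force-legacy=false',
    '-device', 'virtio-net-device,id=nic0,netdev=net0,mac=00:16:3e:13:86:9c,iommu_platform=on',
    '-netdev', 'type=tap,id=net0,ifname=vif1.0-emu,br=xenbr0,script=no,downscript=no',
    '-device', 'virtio-blk-device,drive=image,iommu_platform=on',
    '-drive', 'if=none,id=image,format=raw,file=/etc/xen/file.img',
]
```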

xl launches QEMU with the following args:
/usr/bin/qemu-system-aarch64 -xen-domid 1 -no-shutdown \
  -chardev socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-1,server=on,wait=off \
  -mon chardev=libxl-cmd,mode=control \
  -chardev socket,id=libxenstat-cmd,path=/var/run/xen/qmp-libxenstat-1,server=on,wait=off \
  -mon chardev=libxenstat-cmd,mode=control \
  -nodefaults -no-user-config -xen-attach -name g0 \
  -vnc none -display none -nographic \
  -global virtio-mmio.force-legacy=false \
  -device virtio-net-device,id=nic0,netdev=net0,mac=00:16:3e:13:86:9c,iommu_platform=on \
  -netdev type=tap,id=net0,ifname=vif1.0-emu,br=xenbr0,script=no,downscript=no \
  -machine xenpvh -m 1024 \
  -device virtio-blk-device,drive=image,iommu_platform=on \
  -drive if=none,id=image,format=raw,file=/etc/xen/file.img \
  -global virtio-mmio.force-legacy=false

Cheers,
Edgar


>
> Edgar (CCed) has recently set up a working system with QEMU and the
> xenpvh machine for ARM. He should be able to help you.
>
> Cheers,
>
> Stefano
>
>
> > [0] 
> > https://privatebin.net/?0fc1db27433dbcb5#4twCBMayizr7x89pxPzNqQ198z92q8YxVheHvNDsVAtd
> > [1] 
> > https://privatebin.net/?ec3cb13fe2a086a1#F1zynLYQJCUDfZiwikZtRBEPJTACR2GZX6jn2ShXxmae
> > >> For SCMI, I'll let Bertrand (CCed) comment.
> > >>
> > >> Cheers,
> > >>
> > >> Stefano
> > >>
> > >>
> > >> On Thu, 11 Jan 2024, Andrei Cherechesu (OSS) wrote:
> > >>> Hello,
> > >>>
> > >>> As I've mentioned in previous discussion threads in the xen-devel
> > >>> community, we are running Xen 4.17 (uprev to 4.18 in progress) on NXP
> > >>> S32G automotive processors (Cortex-A53 cores) and we wanted to know more
> > >>> about the support for various VirtIO device types in Xen.
> > >>>
> > >>> In the Xen 4.17 release notes, the VirtIO standalone backends mentioned
> > >>> as supported and tested are: virtio-disk, virtio-net, virtio-i2c and
> > >>> virtio-gpio.
> > >>>
> > >>> However, we've only managed to successfully set up and try some
> > >>> use-cases with the virtio-disk standalone backend [0] (which Olexandr
> > >>> provided) based on the virtio-mmio transport.
> > >>>
> > >>> As such, we have a few questions, which we haven't been able to figure
> > >>> out from the mailing list discussions and/or code:
> > >>>     1. Are there any plans for the virtio-disk repo to have a stable
> > >>>     version? Is it going to be long-term hosted and maintained in the
> > >>>     xen-troops github repo? Or was it just a one-time PoC implementation,
> > >>>     and the strategy for future VirtIO devices will be based on a more
> > >>>     generic approach (i.e., without the need for a specific standalone
> > >>>     app)?
> > >>>
> > >>>     2. With regard to the other backends, we want to try out and provide
> > >>>     PV networking to a DomU based on virtio-net, but we haven't found
> > >>>     any available resources for it (e.g., the standalone backend
> > >>>     implementation if needed for the control plane, configuration
> > >>>     examples, presentations, demos, docs). Does it rely on the QEMU
> > >>>     virtio-net or vhost implementation? Are there any examples on how
> > >>>     to set it up? Any required Xen/Linux kernel/QEMU versions?
> > >>>
> > >>>     3. What other VirtIO device types are planned to be supported in
> > >>>     Xen? I'm supposing libxl will also need changes to accommodate new
> > >>>     configuration parameters for each of them. Or is there something
> > >>>     I'm missing?
> > >>>
> > >>>     4. Also, while we're at it, are there any plans regarding SCMI
> > >>>     awareness for Xen (e.g., an SCMI Mediator, where the RFC thread
> > >>>     from 2022 seems discontinued)? Or is the preferred approach to
> > >>>     share SCMI access with guests through virtio-scmi?
> > >>>
> > >>> Thank you very much for the support, once again, and we're also looking
> > >>> forward to the progress on the rust-vmm initiative.
> > >>>
> > >>> Regards,
> > >>> Andrei Cherechesu,
> > >>> NXP Semiconductors
> > >>>
> > >>> [0] https://github.com/xen-troops/virtio-disk
> > >>>
> > >>>
> > >>>
> >



 

