RE: question about virtio-vsock on xen
Hi Oleksandr,

> Subject: Re: question about virtio-vsock on xen
>
>
> On 23.02.24 23:42, Stefano Stabellini wrote:
> > Hi Peng,
>
> Hello Peng, Stefano
>
> >
> > We haven't tried to set up virtio-vsock yet.
> >
> > In general, I am very supportive of using QEMU for virtio backends. We
> > use QEMU to provide virtio-net, virtio-block, virtio-console and more.
> >
> > However, typically virtio-vsock comes into play for VM-to-VM
> > communication, which is different. Going via QEMU in Dom0 just to have
> > one VM communicate with another VM is not an ideal design: it adds
> > latency and uses resources in Dom0 when actually we could do without it.
> >
> > A better model for VM-to-VM communication would be to have the VMs talk
> > to each other directly via the grant table or pre-shared memory (see the
> > static shared memory feature) or via Xen hypercalls (see Argo).
> >
> > For a good Xen design, I think the virtio-vsock backend would need to
> > be in Xen itself (the hypervisor).
> >
> > Of course that is more work and it doesn't help you with the specific
> > question you had below :-)
> >
> > For that, I don't have a pointer to help you, but maybe others in CC
> > have.
>
> Yes, I will try to provide some info ...
>
> >
> > Cheers,
> >
> > Stefano
> >
> >
> > On Fri, 23 Feb 2024, Peng Fan wrote:
> >> Hi All,
> >>
> >> Has anyone made virtio-vsock on Xen work? My dm args are as below:
> >>
> >> virtio = [
> >>     'backend=0,type=virtio,device,transport=pci,bdf=05:00.0,backend_type=qemu,grant_usage=true'
> >> ]
> >> device_model_args = [
> >>     '-D', '/home/root/qemu_log.txt',
> >>     '-d', 'trace:*vsock*,trace:*vhost*,trace:*virtio*,trace:*pci_update*,trace:*pci_route*,trace:*handle_ioreq*,trace:*xen*',
> >>     '-device', 'vhost-vsock-pci,iommu_platform=false,id=vhost-vsock-pci0,bus=pcie.0,addr=5.0,guest-cid=3'
> >> ]
> >>
> >> During my test it always returns failure in the dom0 kernel in the
> >> code below:
> >>
> >> vhost_transport_do_send_pkt()
> >> {
> >>         ...
> >>         nbytes = copy_to_iter(hdr, sizeof(*hdr), &iov_iter);
> >>         if (nbytes != sizeof(*hdr)) {
> >>                 vq_err(vq, "Faulted on copying pkt hdr %zx %zx %zx %px\n",
> >>                        nbytes, sizeof(*hdr),
> >>                        __builtin_object_size(hdr, 0), &iov_iter);
> >>                 kfree_skb(skb);
> >>                 break;
> >>         }
> >> }
> >>
> >> I checked copy_to_iter: it copies data to a __user address, but it
> >> never succeeds; the copy to the __user address always returns 0 bytes
> >> copied.
> >>
> >> The instruction "sttr x7, [x6]" triggers a data abort and the kernel
> >> runs into do_page_fault, but lock_mm_and_find_vma reports
> >> VM_FAULT_BADMAP, which means the __user address is not mapped: no vma
> >> covers this address.
> >>
> >> I am not sure what may cause this. I would appreciate any comments.
>
>
> ... Peng, we have a vhost-vsock (and vhost-net) Xen PoC. Although it is
> non-upstreamable in its current shape (it is based on an old Linux version,
> requires some rework and proper integration, and most likely requires
> involving Qemu and protocol changes to pass additional info to vhost), it
> works with Linux v5.10 + patched Qemu v7.0, so you can refer to the Yocto
> meta layer which contains the kernel patches for the details [1].

Thanks for the pointer, I am reading the code.

>
> In a nutshell, before accessing the guest data the host module needs to map
> the descriptors in the virtio rings, which contain either guest grant-based
> DMA addresses (by using Xen grant mappings) or guest pseudo-physical
> addresses (by using Xen foreign mappings). After accessing the guest data
> the host module needs to unmap them.
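If I understand that correctly, the host side has to do roughly the
following around every descriptor access. The code below is only my rough,
untested sketch against the mainline Linux grant-table API in
<xen/grant_table.h>, covering the grant-mapping case for a single page; it
is not the PoC code, and the function names are made up for illustration:

#include <linux/mm.h>
#include <xen/grant_table.h>

/* Map one grant-referenced guest page into the dom0 kernel. */
static void *map_one_guest_page(domid_t guest_domid, grant_ref_t gref,
                                struct page **page_out,
                                grant_handle_t *handle_out)
{
        struct gnttab_map_grant_ref map_op;
        struct page *page;
        void *vaddr;

        /* Allocate a local page to back the mapping. */
        if (gnttab_alloc_pages(1, &page))
                return NULL;

        vaddr = pfn_to_kaddr(page_to_pfn(page));

        /* Ask Xen to map the guest's grant at that address. */
        gnttab_set_map_op(&map_op, (unsigned long)vaddr,
                          GNTMAP_host_map, gref, guest_domid);
        if (gnttab_map_refs(&map_op, NULL, &page, 1) ||
            map_op.status != GNTST_okay) {
                gnttab_free_pages(1, &page);
                return NULL;
        }

        *page_out = page;
        *handle_out = map_op.handle;
        return vaddr;   /* guest data can be accessed through this now */
}

/* Undo the mapping once the descriptor has been processed. */
static void unmap_one_guest_page(struct page *page, void *vaddr,
                                 grant_handle_t handle)
{
        struct gnttab_unmap_grant_ref unmap_op;

        gnttab_set_unmap_op(&unmap_op, (unsigned long)vaddr,
                            GNTMAP_host_map, handle);
        gnttab_unmap_refs(&unmap_op, NULL, &page, 1);
        gnttab_free_pages(1, &page);
}

The foreign-mapping variant would map the guest's pseudo-physical addresses
instead of grant references, and a real backend would of course batch these
operations per descriptor chain rather than going page by page.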
Ok, I thought the current Xen virtio code already handled this mapping.

>
> Also note, in that PoC the target mapping scheme is controlled via a module
> param and the guest domain id is retrieved from the device-model specific
> part of Xenstore (so Qemu/protocol are unmodified). But you might want to
> look at [2] as an example of vhost-user protocol changes showing how to
> pass that additional info.

Sure, thanks very much for the link, I am giving it a look. (A rough sketch
of my understanding of the module-param/Xenstore part is at the end of this
mail.)

>
> Hope that helps.

Definitely. Thanks, Peng.

>
> [1] https://github.com/xen-troops/meta-xt-vhost/commits/main/
> [2] https://www.mail-archive.com/qemu-devel@nongnu.org/msg948327.html
>
> P.S. May answer with a delay.
>
>
> >>
> >> BTW: I tested blk pci, it works, so virtio pci should work on my setup.
> >>
> >> Thanks,
> >> Peng.
> >>
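P.S. from my side: to make sure I read the module-param/Xenstore point
correctly, here is an illustrative, untested sketch of what such a
configuration could look like in a host kernel module. The parameter name,
its encoding and the Xenstore node used below are assumptions for the
example, not the PoC's actual interface:

#include <linux/module.h>
#include <xen/xenbus.h>

/* 0 = Xen grant mappings, 1 = Xen foreign mappings (illustrative). */
static int map_scheme;
module_param(map_scheme, int, 0444);
MODULE_PARM_DESC(map_scheme, "0: grant mappings, 1: foreign mappings");

/*
 * Look up the guest domid under a device-model specific Xenstore
 * directory; the actual directory/node layout is PoC specific.
 */
static int read_guest_domid(const char *dm_dir, domid_t *domid_out)
{
        unsigned int val;
        int ret;

        ret = xenbus_scanf(XBT_NIL, dm_dir, "frontend-id", "%u", &val);
        if (ret < 0)
                return ret;
        if (ret != 1)
                return -EINVAL;

        *domid_out = (domid_t)val;
        return 0;
}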