Re: [RFC PATCH 2/6] xen/public: arch-arm: reserve resources for virtio-pci
On 15.11.23 19:31, Julien Grall wrote:
> Hi Oleksandr,

Hello Julien

>
> On 15/11/2023 16:51, Oleksandr Tyshchenko wrote:
>>
>> On 15.11.23 14:33, Julien Grall wrote:
>>> Hi,
>>
>> Hello Julien
>>
>> Let me please try to explain some bits.
>>
>>> Thanks for adding support for virtio-pci in Xen. I have some questions.
>>>
>>> On 15/11/2023 11:26, Sergiy Kibrik wrote:
>>>> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
>>>>
>>>> In order to enable more use-cases such as having multiple
>>>> device-models (Qemu) running in different backend domains which provide
>>>> virtio-pci devices for the same guest, we allocate and expose one
>>>> PCI host bridge for every virtio backend domain for that guest.
>>>
>>> OOI, why do you need to expose one PCI host bridge for every stubdomain?
>>>
>>> In fact, looking at the next patch, it seems you are handling some of the
>>> hostbridge requests in Xen. This adds a bit more confusion.
>>>
>>> I was expecting the virtual PCI device would be in the vPCI and each
>>> Device emulator would advertise which BDF they are covering.
>>
>> This patch series only covers use-cases where the device emulator
>> handles the *entire* PCI Host bridge and the PCI (virtio-pci) devices
>> behind it (i.e. Qemu). Also, this patch series doesn't touch vPCI/PCI
>> pass-through resources, handling, accounting, nothing.
>
> I understood you want one Device Emulator to handle the entire PCI
> host bridge. But...
>
>> From the hypervisor we only need help to intercept the config space
>> accesses that happen in the range [GUEST_VIRTIO_PCI_ECAM_BASE ...
>> GUEST_VIRTIO_PCI_ECAM_BASE + GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE] and
>> forward them to the linked device emulator (if any), that's all.
>
> ... I really don't see why you need to add code in Xen to trap the
> region. If QEMU is dealing with the hostbridge, then it should be able
> to register the MMIO region and then do the translation.

Hmm, that sounds surprising, I would say. Are you saying that unmodified
Qemu will work if we drop #5? I think this wants to be re-checked
(@Sergiy, can you please investigate?). If that is indeed so, then #5
will of course be dropped from this series (I would say, postponed until
there are more use-cases).
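To make the discussion concrete, the interception we have in mind is
essentially a range check plus a lookup of which per-backend ECAM window
the access falls into. The sketch below is only an illustration built
from the layout constants introduced by this patch (re-defined locally as
plain ULL values); it is not the code from #5, and virtio_pci_ecam_decode()
is a hypothetical helper name.

/*
 * Illustrative sketch only: decide whether a trapped access at "gpa"
 * falls into the reserved virtio-pci ECAM hole and, if so, which host
 * bridge (i.e. which backend's device emulator) it belongs to.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define GUEST_VIRTIO_PCI_ECAM_BASE       0x33000000ULL
#define GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE 0x01000000ULL
#define GUEST_VIRTIO_PCI_HOST_ECAM_SIZE  0x00200000ULL

static bool virtio_pci_ecam_decode(uint64_t gpa, unsigned int *bridge_idx)
{
    if ( gpa < GUEST_VIRTIO_PCI_ECAM_BASE ||
         gpa >= GUEST_VIRTIO_PCI_ECAM_BASE + GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE )
        return false;

    /* Each host bridge owns one GUEST_VIRTIO_PCI_HOST_ECAM_SIZE window. */
    *bridge_idx = (gpa - GUEST_VIRTIO_PCI_ECAM_BASE) /
                  GUEST_VIRTIO_PCI_HOST_ECAM_SIZE;
    return true;
}

int main(void)
{
    unsigned int idx;

    /* An access at ECAM base + 0x250000 lands in the window of bridge 1. */
    if ( virtio_pci_ecam_decode(GUEST_VIRTIO_PCI_ECAM_BASE + 0x250000, &idx) )
        printf("forward to the emulator serving host bridge %u\n", idx);

    return 0;
}

Whether Xen or an unmodified Qemu ends up doing this lookup is exactly the
open question above.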
>
>> It is not possible (with the current series) to run device emulators that
>> emulate only separate PCI (virtio-pci) devices. For that to be possible, I
>> think, many more changes are required than the current patch series does.
>> There should at least be special PCI Host bridge emulation in Xen (or
>> reuse of vPCI) for the integration. Also, Xen should be in charge of
>> forming the resulting PCI interrupt based on each PCI device's level
>> signaling (if we use legacy interrupts), some kind of x86's
>> XEN_DMOP_set_pci_intx_level, etc. Please note, I am not saying this is
>> not possible in general, likely it is possible, but the initial patch
>> series doesn't cover these use-cases.
>>
>> We expose one PCI host bridge per virtio backend domain. This is a
>> separate PCI host bridge to combine all virtio-pci devices running in
>> the same backend domain (in the same device emulator currently).
>>
>> The examples:
>> - if only one domain runs Qemu which serves virtio-blk, virtio-net and
>> virtio-console devices for DomU - only a single PCI Host bridge will be
>> exposed for DomU
>> - if we add another domain running Qemu to additionally serve virtio-gpu,
>> virtio-input and virtio-snd for the *same* DomU - we expose a second PCI
>> Host bridge for DomU
>>
>> I am afraid we cannot end up exposing only a single PCI Host bridge with
>> the current model (if we use device emulators running in different
>> domains that handle the *entire* PCI Host bridges), this won't work.
>
> That makes sense and it is fine. But see above, I think only #2 is
> necessary for the hypervisor. Patch #5 should not be necessary at all.

Good, it should be re-checked without #5, sure.

>
> [...]
>
>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
>>>> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@xxxxxxxx>
>>>> ---
>>>>   xen/include/public/arch-arm.h | 21 +++++++++++++++++++++
>>>>   1 file changed, 21 insertions(+)
>>>>
>>>> diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
>>>> index a25e87dbda..e6c9cd5335 100644
>>>> --- a/xen/include/public/arch-arm.h
>>>> +++ b/xen/include/public/arch-arm.h
>>>> @@ -466,6 +466,19 @@ typedef uint64_t xen_callback_t;
>>>>   #define GUEST_VPCI_MEM_ADDR                 xen_mk_ullong(0x23000000)
>>>>   #define GUEST_VPCI_MEM_SIZE                 xen_mk_ullong(0x10000000)
>>>> +/*
>>>> + * 16 MB is reserved for virtio-pci configuration space based on calculation
>>>> + * 8 bridges * 2 buses x 32 devices x 8 functions x 4 KB = 16 MB
>>>
>>> Can you explain how you decided on the "2"?
>>
>> good question, we have limited free space available in the memory layout
>> (we had difficulties finding suitable holes) and we also don't expect a
>> lot of virtio-pci devices, so the "256" buses used by vPCI would be too
>> much. It was decided to reduce it significantly, but select the maximum
>> that fits into the free space; with "2" buses we still fit into the
>> chosen holes.
>
> If you don't expect a lot of virtio devices, then why do you need two
> buses? Wouldn't one be sufficient?

For now one would be sufficient, I think. @Sergiy, if you reduce this to a
single bus here, don't forget to update the "bus-range" property in the
device-tree node.
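For reference, the per-bridge and total ECAM reservations in this patch
follow directly from the standard ECAM layout of 4 KB of config space per
function. A back-of-the-envelope check of the arithmetic behind the
"16 MB" comment (my own sketch, not code from the series):

/* Sanity check of the reserved ECAM sizes (illustration only). */
#include <stdio.h>

int main(void)
{
    const unsigned long cfg_per_func     = 4UL << 10; /* 4 KB per function (ECAM) */
    const unsigned long funcs_per_bus    = 32 * 8;    /* 32 devices x 8 functions */
    const unsigned long buses_per_bridge = 2;         /* the "2" being discussed */
    const unsigned long bridges          = 8;         /* one per virtio backend domain */

    unsigned long host_ecam  = buses_per_bridge * funcs_per_bus * cfg_per_func;
    unsigned long total_ecam = bridges * host_ecam;

    /*
     * Prints 0x200000 (2 MB) and 0x1000000 (16 MB), i.e. the values of
     * GUEST_VIRTIO_PCI_HOST_ECAM_SIZE and GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE.
     */
    printf("per-bridge ECAM: %#lx, total ECAM: %#lx\n", host_ecam, total_ecam);
    return 0;
}

With a single bus per bridge the same formula gives 1 MB per bridge and
8 MB total.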
>
>>
>>>
>>>> + */
>>>> +#define GUEST_VIRTIO_PCI_ECAM_BASE          xen_mk_ullong(0x33000000)
>>>> +#define GUEST_VIRTIO_PCI_TOTAL_ECAM_SIZE    xen_mk_ullong(0x01000000)
>>>> +#define GUEST_VIRTIO_PCI_HOST_ECAM_SIZE     xen_mk_ullong(0x00200000)
>>>> +
>>>> +/* 64 MB is reserved for virtio-pci memory */
>>>> +#define GUEST_VIRTIO_PCI_ADDR_TYPE_MEM      xen_mk_ullong(0x02000000)
>>>> +#define GUEST_VIRTIO_PCI_MEM_ADDR           xen_mk_ullong(0x34000000)
>>>> +#define GUEST_VIRTIO_PCI_MEM_SIZE           xen_mk_ullong(0x04000000)
>>>> +
>>>>   /*
>>>>    * 16MB == 4096 pages reserved for guest to use as a region to map its
>>>>    * grant table in.
>>>> @@ -476,6 +489,11 @@ typedef uint64_t xen_callback_t;
>>>>   #define GUEST_MAGIC_BASE                    xen_mk_ullong(0x39000000)
>>>>   #define GUEST_MAGIC_SIZE                    xen_mk_ullong(0x01000000)
>>>> +/* 64 MB is reserved for virtio-pci Prefetch memory */
>>>
>>> This doesn't seem a lot depending on your use case. Can you detail how
>>> you came up with "64 MB"?
>>
>> the same calculation as was done for the configuration space. It was
>> observed that only 16K is used per virtio-pci device (maybe it can be
>> bigger for a usual PCI device, I don't know). Please look at the example
>> of the DomU log below (the strings that contain "*BAR 4: assigned*"):
>
> What about virtio-gpu? I would expect a bit more memory is necessary for
> that use case.

In the DomU log I provided in the last response, virtio-gpu was also
present among the 5 virtio-pci devices and it worked at runtime:

[ 0.474575] pci 0001:00:03.0: [1af4:1050] type 00 class 0x038000
[ 0.476534] pci 0001:00:03.0: reg 0x20: [mem 0x00000000-0x00003fff 64bit pref]
....
[ 0.496656] pci 0001:00:03.0: BAR 4: assigned [mem 0x3a808000-0x3a80bfff 64bit pref]
....
[ 0.550208] virtio-pci 0001:00:03.0: enabling device (0000 -> 0002)
....
0001:00:03.0 Display controller: Red Hat, Inc. Virtio GPU (rev 01)

I guess it indeed needs more memory, but that is related to the I/O
descriptors passed via the virtqueues at runtime.
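Just to put rough numbers on the static BAR side (my own arithmetic
sketch, not from the series; it assumes ~16 KB of BAR space per device as
in the log above and that the 64 MB window is shared by up to 8 host
bridges, which is how I read "the same calculation"):

/* Rough capacity check of a 64 MB virtio-pci memory window (sketch only). */
#include <stdio.h>

int main(void)
{
    const unsigned long window_size = 64UL << 20; /* the 64 MB reservation   */
    const unsigned long per_device  = 16UL << 10; /* ~16 KB of BARs observed */
    const unsigned long bridges     = 8;          /* one per backend domain  */

    printf("devices that fit in total:     %lu\n", window_size / per_device);
    printf("devices per host bridge share: %lu\n",
           window_size / bridges / per_device);
    return 0;
}

So the window itself is not the limiting factor for the BARs we have seen;
as noted above, any extra memory virtio-gpu needs at runtime is about the
virtqueue I/O rather than this reservation.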
>
> Any case, what I am looking for is some explanation in the commit
> message of the limits. I don't particularly care about the exact limit
> because this is not part of a stable ABI.

ok, sure

>
> Cheers,
>