Re: [Xen-devel] [RFC] [Draft Design] ACPI/IORT Support in Xen.
On 10/27/2017 7:35 PM, Andre Przywara wrote:
> Hi,

Hey Andre,

> On 25/10/17 09:22, Manish Jaggi wrote:
>> On 10/23/2017 7:27 PM, Andre Przywara wrote:
>>> Hi Manish,
>>>
>>> On 12/10/17 22:03, Manish Jaggi wrote:

>> Sure will do that. Thanks for pointing that out. We can have an IRC discussion on this. I think, apart from the rewriting, the other tasks which were required and are handled in this epic task are:
>> - parse the IORT and save it in xen internal data structures
>> - common code to generate the IORT for dom0/domU
>> - all xen code that parses the IORT multiple times now uses the xen internal data structures

> Yes, that sounds about right. :)

(I have explained this in this mail below.)

> So to some degree your statements are true, but when we rewrite the IORT table without SMMUs (and possibly without other components like the PMUs), it would be kind of a stretch to call it "fairly similar to the host IORT". I think "based on the host IORT" would be more precise.

Yes, "based on the host IORT" is better, thanks.

>>>> 4. IORT for DomU
>>>> ----------------
>>>> IORT for DomU is generated by the toolstack. IORT topology is different when DomU supports device passthrough.

>>> Can you elaborate on that? Different compared to what? My understanding is that without device passthrough there would be no IORT in the first place?

>> I was exploring the possibility of having virtual devices for DomU. So if a virtual device is assigned to a guest, there needs to be some mapping in the IORT as well. This virtual device can be on a PCI bus or be a platform device. Device pass-through can be split into two parts:
>> a. platform device passthrough (not on a PCI bus)
>> b. PCI device PT

> I understand that, but am still wondering how it would be "different". We just start with creating our mapping data structure *from scratch*, the same one we generate by *parsing* the host IORT. Whether this points to a purely virtual device, a PCI PT or a platform PT should not matter for this purpose.

I rest my case till I can cite a valid example :)

>> => If we discount the possibility of a virtual device for domU and of platform device passthrough, then you are correct, no IORT is required.

> I believe we need an IORT once we have devices which use MSIs.

Yes.

>> When PCI device passthrough is supported, the PCIRC is itself virtual (emulated by Xen). One can have any number of virtual PCIRCs and maybe virtual SMMUs. Hence the topology can vary.

> I think I don't disagree, my initial comment was just about the confusion that this "IORT topology is *different* from" term created.

Ok, I will move it to a different section and remove the term "different". Now read the lines below.

>>>> At a minimum the domU IORT should include a single PCIRC and ITS Group. A similar PCIRC can be added in the DSDT. An additional node can be added if a platform device is assigned to the domU. No extra node should be required for PCI device pass-through.

>>> Again I don't fully understand this last sentence.

>> The last line is a continuation of the first line "At a minimum...".

> OK, but still I don't get how we would end up with an IORT without (pass-throughed) PCI devices in the first place?

If, hypothetically, a platform device uses MSIs. I will let Sameer comment on it. Our platform does not have a Named Component node in the IORT.

>>>> It is proposed that the idrange of the PCIRC and ITS group be constant for domUs.

>>> "constant" is a bit confusing here. Maybe "arbitrary", "from scratch" or "independent from the actual h/w"?

Ok, that is implementation defined.

>>>> In case of PCI PT, using a domctl the toolstack can communicate the physical RID:virtual RID and physical deviceID:virtual deviceID mappings to Xen. It is assumed that domU PCI config accesses would be trapped in Xen. The RID at which the assigned device is enumerated would be the one provided by the domctl, domctl_set_deviceid_mapping (TODO: device assign domctl i/f).
>>>> Note: This should suffice for the virtual deviceID support pointed out by Andre. [4]
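For illustration, the mapping such a domctl would carry per assigned device could look roughly like the sketch below. The structure name just follows the domctl_set_deviceid_mapping placeholder above; the fields and their layout are assumptions made for the example, not a settled interface.

    #include <stdint.h>

    /* Sketch only: one possible shape for the RID/deviceID mapping the
     * toolstack could hand to Xen for a passed-through device. */
    struct xen_domctl_set_deviceid_mapping {
        uint32_t phys_rid;       /* requester ID on the host PCI topology  */
        uint32_t virt_rid;       /* requester ID at which the domU
                                    enumerates the device                  */
        uint32_t phys_deviceid;  /* deviceID used on the physical ITS      */
        uint32_t virt_deviceid;  /* deviceID the domU uses in ITS commands */
    };

Xen would record these per-domain pairs and consult them wherever a guest-provided deviceID has to be translated before the physical ITS is programmed.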
>>> Well, there's more to it. First thing: while I tried to include virtual ITS deviceIDs to be different from physical ones, at the moment they are fixed to being mapped 1:1 in the code.

>> oh

>>> So the first step would be to go over the ITS code and identify where "devid" refers to a virtual deviceID and where to a physical one (probably renaming them accordingly). Then we would need a function to translate between the two. At the moment this would be a dummy function (just return the input value). Later we would loop in the actual table.

>> Some thought here... Wouldn't it be better to call a helper function to translate the devid coming from the guest? The helper function would look at the table created by handling successive domctls (the one mentioned here).

> Exactly.

Thanks.

>>>> We might not need this domctl if the assign_device hypercall is extended to provide this information.

>>> Do we actually need a new interface, or even extend the existing one? If I got Julien correctly, the existing interface is just fine?

>> Could you explain which existing interface can be used to translate a guest device ID to a host device ID when an ITS command gets trapped in Xen? Maybe I am missing something here.

> I haven't looked in detail, but will do.

There are a few cases which the spec supports but are not found in present-day hardware, for instance:
a. The spec allows having two PCI_RCs behind the same SMMU and that SMMU behind a single ITS. How would you map an ITS deviceID back to the PCI_RC?
b. Similarly, if a simplified RC->ITS list is created (which is in fact similar to the IORT for Dom0, hiding the SMMU), how would you know which SMMU the device is on? So you have to look up another list, which is RC->SMMU.

Yes, but I have a context which points to the IORT node, so not everything is replicated.

Each PCIRC node can have an array of idmaps, and each idmap entry can reference multiple idmap entries in the output-referenced SMMU idmap. I had a similar discussion with Julien on the v1 version of my IORT SMMU hide patch.

Moreover, I don't quite understand where iort_id_map would fit. As you can see in the reply below, we have other things to take care of as well.

> So parsing the IORT would create and fill a list of those structures. For a lookup we would just iterate over that list, find a matching entry and:
>     return (input_id - match->input_range.base) + match->its_devid_base;
> Ideally we abstract this via some functions, so that we can later swap this for more efficient data structures should the need arise.

This structure is created at the point the IORT table is parsed, say from acpi_iort_init. It is proposed to use this structure information in iort_init_platform_devices. [2] ([RFC v2 4/7] ACPI: arm: Support for IORT)
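To make the lookup quoted above concrete, a minimal sketch of such an id-map list and translation helper might look like this. Only input_range.base and its_devid_base appear in the quoted snippet; the remaining names, the size field and the list layout are assumptions made for the example.

    #include <xen/errno.h>
    #include <xen/list.h>
    #include <xen/types.h>

    /* One entry of the flat mapping list filled while parsing the IORT. */
    struct iort_id_map {
        struct list_head entry;
        struct {
            uint32_t base;       /* first input ID (e.g. requester ID) */
            uint32_t size;       /* number of IDs covered by the entry */
        } input_range;
        uint32_t its_devid_base; /* output deviceID for input_range.base */
    };

    /* Iterate the list, find the entry covering input_id and apply the
     * offset, as described in the quoted snippet. */
    static int iort_translate_id(struct list_head *maps, uint32_t input_id,
                                 uint32_t *its_devid)
    {
        struct iort_id_map *match;

        list_for_each_entry ( match, maps, entry )
        {
            if ( input_id >= match->input_range.base &&
                 input_id - match->input_range.base < match->input_range.size )
            {
                *its_devid = (input_id - match->input_range.base) +
                             match->its_devid_base;
                return 0;
            }
        }

        return -ENODEV; /* no mapping for this ID */
    }

Whether a flat list like this is enough, or whether back pointers to the SMMU and PCI_RC nodes are needed as discussed below, is exactly the open question about the graph structure.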
> [...] with the pointering complicating things.

How are pointers complicating things? In which case do you think there would be extra pointer handling? I was in fact planning to add a back pointer so that, given an ITS id, we can trace back to the SMMU and the PCI_RC.

> I believe what we need is:
> 1) a mapping from a PCI-RC or PT-NC to stream IDs, for programming the SMMU in Xen
> 2) a mapping from a PCI-RC or PT-NC to ITS devIDs, for programming the ITS when being asked for by a guest (incl. Dom0)
> I think that the IORT is a streamlined and optimized representation of those mappings, which we don't necessarily need to replicate 1:1 in an in-memory data structure.

We might replicate the skeleton 1:1. As I explained above, we would otherwise inadvertently be parsing multiple lists. Keeping a PCI_RC->ITS mapping and a PCI_RC->SMMU mapping might not be sufficient for all the cases the spec supports. Keeping a graph structure, all cases can be handled.

> But admittedly I haven't looked with too much detail into this, so if you convince me that we need this graph structure, then so be it.

What do you think about this difference in basic structures between the toolstack and xen code? When we write a common library, should I include a #define for mapping the xen structure to the toolstack one? Whether it has more overhead than duplication is an implementation issue.

=> For that reason [2]/[5] might need to be rebased on this task's patch. <=
[5] https://www.mail-archive.com/xen-devel@xxxxxxxxxxxxx/msg123080.html

"b. Generate the IORT for Doms without patching the host IORT, rather regenerate it from xen internal data structures."
Based on this rationale, I think the data structures mentioned would be required.

>>>> 6. IORT Generation
>>>> ------------------
>>>> There would be common code to generate the IORT table from iort_table_struct.

>>> That sounds useful, but we would need to be careful with sharing code between Xen and the tool stack. Has this actually been done before?

>> I added the code sharing part here, but I am not hopeful that this would work, as it would require a lot of code change in the toolstack. A simple difference is that the acpi header structures have different member variables. It is the same for other structures. So we might have to create a lot of defines in common code for sharing, with the possibility of errors. See: struct acpi_header in acpi2_0.h (tools/libacpi) and struct acpi_table_header in actbl.h (xen/include/acpi). That is why I preferred a domctl, so Xen could prepare the IORT for DomU.

> I don't think it's justified to move a simple table generation task into Xen, just to allow code sharing. After all this does not require any Xen internal knowledge. So it should definitely be done in the toolstack.

Yes, fully agree. The point here is duplication or code reuse. See above.

> I think we should follow Julien's suggestion of looking at xen/common/libelf.

ok

> Cheers,
> Andre.

If not code sharing, then code duplication might also work (in that case no domctl is required). We can discuss this more...

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel