Re: Uses of /hypervisor memory range (was: FreeBSD/Xen/ARM issues)
On Fri, May 14, 2021 at 09:32:10AM +0100, Julien Grall wrote:
> On 14/05/2021 03:42, Elliott Mitchell wrote:
> >
> > Issue is what is the intended use of the memory range allocated to
> > /hypervisor in the device-tree on ARM?  What do the Xen developers plan
> > for?  What is expected?
>
> From docs/misc/arm/device-tree/guest.txt:
>
> "
> - reg: specifies the base physical address and size of a region in
>   memory where the grant table should be mapped to, using an
>   HYPERVISOR_memory_op hypercall. The memory region is large enough to map
>   the whole grant table (it is larger or equal to
>   gnttab_max_grant_frames()).
>   This property is unnecessary when booting Dom0 using ACPI.
> "
>
> Effectively, this is a known space in memory that is unallocated. Not
> all the guests will use it if they have a better way to find unallocated
> space.

The word "should" is generally read as strong encouragement: a warning that
$something is lurking here and you may regret recklessly disobeying it
without knowing what is going on behind the scenes. Your language here,
though, suggests "can" would be the better word, since this is simply a
random unused address range.

> > Was the /hypervisor range intended *strictly* for mapping grant-tables?
>
> It was introduced to tell the OS a place where the grant-table could be
> conveniently mapped.

Yet this is strange. If any $random unused address range is acceptable and
the choice is purely the OS's, why does Xen bother suggesting a particular
range at all?

> > Was it intended for /hypervisor to grow over the
> > years as hardware got cheaper?
>
> I don't understand this question.

Going to the trouble of suggesting a range points to something going on.
I'm looking for an explanation, since strange choices might hint at
something unpleasant lurking below, and I should watch where I step.
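For concreteness, the node guest.txt is describing looks roughly like the
fragment below. The addresses, sizes, Xen version string, and interrupt
specifier here are purely illustrative placeholders, not the values any
particular Xen build advertises:

```dts
hypervisor {
	compatible = "xen,xen-4.15", "xen,xen";
	/* Illustrative only: base and size of the unallocated region
	 * where the grant table "should" be mapped; the real values
	 * are chosen by Xen when it builds the guest's device tree. */
	reg = <0x0 0x38000000 0x0 0x01000000>;
	/* Event-channel upcall interrupt (PPI); also illustrative. */
	interrupts = <1 15 0xf08>;
};
```

The whole discussion above is about how much weight a guest OS should give
to that `reg` property: binding recommendation, or merely a hint.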
> > Might it be better to deprecate the /hypervisor range and have domains
> > allocate any available address space for foreign mappings?
>
> It may be easy for FreeBSD to find available address space, but so far
> this has not been the case in Linux (I haven't checked the latest
> version though).
>
> To be clear, an OS is free to not use the range provided in /hypervisor
> (maybe this is not clear enough in the spec?). This was mostly
> introduced to overcome some issues we saw in Linux when Xen on Arm was
> introduced.

Mind if I paraphrase this?  "This is a bring-up hack for Linux which has
hung around because we haven't felt any pressure to fix the underlying
Linux issue."  Is that reasonable?

> > Should the FreeBSD implementation be treating grant tables as distinct
> > from other foreign mappings?
>
> Both require unallocated address space to work. IIRC FreeBSD is able to
> find unallocated space easily, so I would recommend to use it.

That is how it is supposed to work, but it appears there is presently a
bug which has broken that functionality on ARM. As such, being a properly
lazy developer, if I can abuse the /hypervisor address range for all
foreign mappings, I will.

My feeling is one of two things should happen with the /hypervisor address
range:

1. OSes could be encouraged to use it for all foreign mappings. The range
   should then be dynamic in some fashion, and there could be a handy way
   to configure the amount of address space thus reserved.

2. The range should be declared deprecated. Everyone should be put on the
   same page that this was a quick hack for bringing up Xen/ARM/Linux, and
   really it shouldn't have escaped.

> > (is treating them the same likely to
> > induce buggy behavior on x86?)
>
> I will leave this answer to Roger.

This was directed towards *you*. There is this thing here which looks odd
in a vaguely unpleasant way, and I'm trying to figure out whether I should
embrace it or run away.

On Fri, May 14, 2021 at 12:07:53PM +0200, Roger Pau Monné
wrote:
> On Fri, May 14, 2021 at 09:32:10AM +0100, Julien Grall wrote:
> > On 14/05/2021 03:42, Elliott Mitchell wrote:
> > > Was it intended for the /hypervisor range to dynamically scale with the
> > > size of the domain?
> >
> > As per above, this doesn't depend on the size of the domain. Instead,
> > this depends on what sort of backend will be present in the domain.
>
> It should instead scale based on the total memory on the system, ie:
> if your hardware has 4GB of RAM the unpopulated range should at least
> be: 4GB - memory of the current domain, so that it could map any
> possible page assigned to a different domain (and even then I'm not
> sure we shouldn't account for duplicated mappings).

This would be approach #1 from above. Going fully in this direction seems
reasonable if the entire Xen/ARM team is up for it; otherwise approach #2
also seems reasonable. The problem is that the current situation is an
unreasonable hybrid of the two.

> > > Should the FreeBSD implementation be treating grant tables as distinct
> > > from other foreign mappings?
> >
> > Both require unallocated address space to work. IIRC FreeBSD is able to
> > find unallocated space easily, so I would recommend to use it.
>
> I agree. I think the main issue here is that there seems to be some
> bug (or behavior not understood properly) with the resource manager
> on Arm that returns an error when requesting a region anywhere in the
> memory address space, ie: [0, ~0].

I'm pretty sure there IS a bug somewhere. The question is whether it is in
the ARM nexus code or in the xenpv code. The thing is, as a lazy developer
I would love to avoid fully diagnosing the bug by using an alternative
approach. Alas, the alternative approach may not be viable longer term, at
which point I'll want to force everyone to endure the hardship of getting
this fully fixed.
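Roger's sizing rule above is simple arithmetic. A minimal sketch (the
function name is mine; the lower bound is the one from his mail):

```python
def min_unpopulated_range(total_ram: int, domain_ram: int) -> int:
    """Lower bound Roger suggests for the unpopulated (foreign-mapping)
    address range: large enough to map every page that belongs to some
    other domain. He notes this still may not account for duplicated
    mappings, so treat it as a floor, not an exact answer."""
    assert domain_ram <= total_ram
    return total_ram - domain_ram

GiB = 1 << 30
# His example: 4GB of RAM, so a 1GiB domain needs >= 3GiB unpopulated.
print(min_unpopulated_range(4 * GiB, 1 * GiB) // GiB)
```

Note this scales with total system RAM, not with the domain's own size,
which is exactly the distinction Julien draws in the quoted exchange.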
:-)

--
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |       ehem+sigmsg@xxxxxxx  PGP 87145445       |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-  _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445