
[Xen-devel] RE: [PATCH v2.0 0/6] Add memory add support to Xen




Keir Fraser wrote:
> On 10/07/2009 08:16, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:
> 
>> There's one other problem with this overall change: Non-pv-ops pv
>> Linux guests (all versions afaict) establish an upper bound on the
>> m2p table size during early boot, and use this to bound check MFNs
>> before accessing the array (see the setup and use of
>> machine_to_phys_order). Hence, when you grow the m2p table, you
>> might need to send some sort of notification to all pv domains so
>> that they can adjust that upper bound. If not a notification, some
>> other communication mechanism will be needed (i.e. a new ELF note).
>> Hot-added memory must never be made visible to a pv guest not
>> supporting this new protocol (in particular, hot add may need to be
>> disabled altogether if Dom0 doesn't support it). 
> 
> The correct answer, I think, is for Xen to specify a machine_to_phys
> order that corresponds to the size of the M2P 'hole' rather than to
> the actual amount of memory currently populated on this host. The
> extra inefficiency is only that some I/O MFNs may be detected via
> fault rather than via an out-of-bounds check (and then probably only
> on systems with <4G RAM).
> 
> This is for x86/64 guests, of course. We already established that
> compat guests and memory add are going to have lesser mutual support.
> 
> -- Keir

I checked this before and I think it is OK.
Currently machine_to_phys_order is calculated from the return value of
XENMEM_machphys_mapping. For both x86_32 and non-compat x86_64 this size
is not adjusted dynamically, so it is fine (it covers the whole possible
range).
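
For reference, the guest-side setup is roughly like below. This is a
paraphrase of the Linux code (names and details may differ slightly
between trees):

    /* Sketch of the guest-side M2P setup: ask Xen where the table
     * lives and how many entries it has, then round that up to a
     * power of two so MFN bound checks stay cheap.  Falls back to
     * the static header values if the hypercall is unavailable. */
    static unsigned long *machine_to_phys_mapping =
        (unsigned long *)MACH2PHYS_VIRT_START;
    static unsigned int machine_to_phys_order;

    static void __init setup_machphys_mapping(void)
    {
        struct xen_machphys_mapping mapping;
        unsigned long nr_ents = MACH2PHYS_NR_ENTRIES;   /* fallback */

        if (HYPERVISOR_memory_op(XENMEM_machphys_mapping, &mapping) == 0) {
            machine_to_phys_mapping = (unsigned long *)mapping.v_start;
            nr_ents = mapping.max_mfn + 1;
        }
        while ((1UL << machine_to_phys_order) < nr_ents)
            machine_to_phys_order++;
    }
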
The only issue is with compat domains: for those, the value returned by
XENMEM_machphys_mapping is adjusted (i.e. derived from
MACH2PHYS_COMPAT_VIRT_START(d)). However, domain_clamp_alloc_bitsize() in
the domain heap allocator makes sure that hot-added memory will not be
assigned to such a guest.
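
The clamp I mean looks roughly like below (paraphrased from the Xen
tree from memory, so take it as a sketch rather than an exact quote):

    /* Sketch: 64-bit and native 32-bit PV guests are left alone; a
     * compat (32-on-64) guest is clamped to the address width its
     * shrunk M2P window can represent, so frames above that limit --
     * hot-added ones included -- are never handed out to it by the
     * heap allocator. */
    unsigned int domain_clamp_alloc_bitsize(struct domain *d,
                                            unsigned int bits)
    {
        if ( (d == NULL) || !is_pv_32on64_domain(d) )
            return bits;                    /* no clamping needed */
        return min(d->arch.physaddr_bitsize, bits);
    }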

Did I misunderstand something?

Thanks
Yunhong Jiang

> 
>> As to pv-ops currently not being affected by this - the respective
>> check currently sits in an #if 0 conditional, but certainly this is
>> a latent bug (becoming a real one as soon as Dom0 or device
>> pass-through come into the picture): Since without the check
>> unbounded MFNs can be used to index into the array, it is possible
>> to access I/O memory here, so simply being prepared to handle a
>> fault resulting from an out-of-bounds access isn't enough. The
>> minimally required boundary check is to make sure the resulting
>> address is still inside hypervisor space (under the assumption that
>> the hypervisor will itself never make I/O memory addressable for the
>> guest).  
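
For completeness, the minimal check Jan describes would look something
like below. This is only an illustrative sketch (not the actual pv-ops
code); the names follow the Linux Xen headers, and the fault fixup path
is assumed to exist elsewhere:

    /* Sketch: bound-check the MFN against the (hole-sized)
     * machine_to_phys_order before indexing, so an unbounded MFN can
     * never be turned into a read of arbitrary -- e.g. I/O -- memory.
     * A read of an unpopulated part of the table may still fault and
     * has to be recovered via the exception tables. */
    static inline unsigned long mfn_to_pfn_checked(unsigned long mfn)
    {
        unsigned long pfn = ~0UL;           /* invalid/foreign marker */

        if (mfn < (1UL << machine_to_phys_order))
            pfn = machine_to_phys_mapping[mfn];   /* may still fault */

        return pfn;
    }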