Re: [Xen-devel] Invalid P2M entries after gnttab unmap



On 03/04/2011 01:34 PM, Ian Campbell wrote:
> On Fri, 2011-03-04 at 17:02 +0000, Tim Deegan wrote:
>> Hi, 
>>
>> At 16:34 +0000 on 04 Mar (1299256499), Daniel De Graaf wrote:
>>> When an HVM guest uses gnttab_map_grant_ref to map granted pages on top of valid
>>> GFNs, it appears that the original MFNs referred to by these GFNs are lost.
>>
>> Yes.  The p2m table only holds one MFN for each PFN (and vice versa).
>> If you want to keep that memory you could move it somewhere else 
>> using XENMAPSPACE_gmfn,
> 
> In which case you might as well do the grant map to "somewhere else" I
> guess?

Yes, remapping seems to be useless unless there's a "somewhere else" that
isn't usable for normal memory access.

>>  or just map your grant refs into an MMIO hole. 
> 
> The platform-pci device has a BAR for this sort of purpose, doesn't it?
> Mostly it just gets used for the grant table itself and perhaps it isn't
> large enough to be a suitable source of mapping space.
> 
> Is there some reason the gntdev driver can't just balloon down
> (XENMEM_decrease_reservation) to make itself a space to map and then
> balloon back up (XENMEM_increase_reservation) after unmap when running
> HVM?

I recall having problems with doing this last time, but I think other changes
to make the balloon driver work in HVM may have fixed the issue. I'll try it
again; I think this would be the best solution.

It may be useful to integrate with the balloon driver and use some of the pages
that have already been ballooned down, rather than forcing every map into a
two-step process. That would also make it far easier to handle a failure to
balloon back up, at the cost of a dependency on the balloon module and a new
interface for requesting and returning ballooned pages.

>>> in this case, perhaps half of the unmapped GFNs
>>> will point to valid memory, and half will point to invalid memory. Here,
>>> "invalid memory" discards writes and returns 0xFF on all reads; valid
>>> memory appears to be normal RAM.
> 
> The workaround relies entirely on this discard and read 0xff behaviour,
> which I'm not sure is wise.
> 
> I'm not especially happy about the idea of 2.6.39 getting released into
> the wild with this hack in it. Luckily there is plenty of time to fix
> the issue properly before then.
> 
> Ian.
> 

Agreed, it would be best to avoid this workaround, since the intended behavior
of unmapping is to produce invalid PFNs (I had written the workaround assuming
they were meant to remain valid). Using the ballooning hypercalls should fix this.

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
