
Re: [Xen-devel] [PATCH v7 02/12] x86/mm: add HYPERVISOR_memory_op to acquire guest resources



>>> On 27.09.17 at 13:34, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 26/09/17 13:49, Paul Durrant wrote:
>>> -----Original Message-----
>>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>>> Sent: 26 September 2017 13:35
>>> To: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Paul Durrant
>>> <Paul.Durrant@xxxxxxxxxx>
>>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx 
>>> Subject: RE: [PATCH v7 02/12] x86/mm: add HYPERVISOR_memory_op to
>>> acquire guest resources
>>>
>>>>>> On 26.09.17 at 14:20, <Paul.Durrant@xxxxxxxxxx> wrote:
>>>>>  -----Original Message-----
>>>>> From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of
>>>>> Paul Durrant
>>>>> Sent: 25 September 2017 16:00
>>>>> To: 'Jan Beulich' <JBeulich@xxxxxxxx>
>>>>> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; xen-
>>>>> devel@xxxxxxxxxxxxxxxxxxxx 
>>>>> Subject: Re: [Xen-devel] [PATCH v7 02/12] x86/mm: add
>>>>> HYPERVISOR_memory_op to acquire guest resources
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>>>>>> Sent: 25 September 2017 15:23
>>>>>> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
>>>>>> Cc: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; xen-
>>>>>> devel@xxxxxxxxxxxxxxxxxxxx 
>>>>>> Subject: Re: [PATCH v7 02/12] x86/mm: add HYPERVISOR_memory_op to
>>>>>> acquire guest resources
>>>>>>
>>>>>>>>> On 18.09.17 at 17:31, <paul.durrant@xxxxxxxxxx> wrote:
>>>>>>> Certain memory resources associated with a guest are not necessarily
>>>>>>> present in the guest P2M and so are not necessarily available to be
>>>>>>> foreign-mapped by a tools domain unless they are inserted, which
>>>>>>> risks shattering a super-page mapping.
>>>>>> Btw., I'm additionally having trouble seeing this shattering of a
>>>>>> superpage: For one, xc_core_arch_get_scratch_gpfn() could be
>>>>>> a little less simplistic. And then even with the currently chosen
>>>>>> value (outside of the range of valid GFNs at that point in time)
>>>>>> there shouldn't be a larger page to be shattered, as there should
>>>>>> be no mapping at all at that index. But perhaps I'm just blind and
>>>>>> don't see the obvious ...
>>>>> The shattering was Andrew's observation. Andrew, can you comment?
>>>>>
>>>> Andrew commented verbally on this. It's not actually a shattering as
>>>> such... The issue, apparently, is that adding the 4k grant table frame
>>>> into the guest p2m will potentially cause all levels of page table to
>>>> be created, but removing it again will only clear the L1 entry. Thus it
>>>> is no longer possible to use a superpage for that mapping at any point
>>>> subsequently.
>>> First of all - what would cause a mapping to appear at that slot (or in
>>> a range covering that slot)?
> 
> ???
> 
> At the moment, the toolstack's *only* way of editing the grant table of
> an HVM guest is to add it into the p2m, map the gfn, write two values,
> and unmap it.  That is how a 4k mapping gets added, which forces an
> allocation or shattering to cause an L1 table to exist.
> 
> This is a kludge and is architecturally unclean.
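
(For reference, the flow being described is roughly the one sketched
below. This is only a simplified illustration modelled on the tool
stack's HVM grant table seeding; seed_grant_entry(), its parameters and
the error handling are made up for the example rather than being actual
libxc code.)

#include <stdint.h>
#include <sys/mman.h>

#include <xenctrl.h>

/* Illustrative only: write one grant entry the way the tool stack
 * currently has to, by temporarily adding grant table frame 0 into
 * the guest p2m at a scratch gfn. */
int seed_grant_entry(xc_interface *xch, uint32_t domid,
                     xen_pfn_t scratch_gfn, grant_ref_t ref,
                     domid_t granted_to, xen_pfn_t frame)
{
    grant_entry_v1_t *gnttab;

    /* 1) Insert the grant table frame into the p2m at scratch_gfn.
     *    This is the step which forces the intermediate page table
     *    levels (and hence an L1) to be allocated. */
    if ( xc_domain_add_to_physmap(xch, domid, XENMAPSPACE_grant_table,
                                  0, scratch_gfn) )
        return -1;

    /* 2) Foreign-map the gfn and write the entry (the real code seeds
     *    the console and xenstore grants this way), with the flags
     *    written last so the entry only becomes valid once complete. */
    gnttab = xc_map_foreign_range(xch, domid, XC_PAGE_SIZE,
                                  PROT_READ | PROT_WRITE, scratch_gfn);
    if ( !gnttab )
        return -1;

    gnttab[ref].domid = granted_to;
    gnttab[ref].frame = frame;
    gnttab[ref].flags = GTF_permit_access;

    /* 3) Unmap; the gfn is then taken back out of the p2m again via
     *    XENMEM_remove_from_physmap (not shown - no public libxc
     *    wrapper is assumed here), which only clears the L1 entry and
     *    leaves the intermediate tables behind. */
    munmap(gnttab, XC_PAGE_SIZE);

    return 0;
}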

Well, if the grant-table-related parts of the series here were presented
as simply cleaning up a kludge, I'd probably be fine. But so far it has
been claimed that there are other bad effects, besides this just being
(as I would call it) sub-optimal.

>>>  And then, while re-combining contiguous
>>> mappings indeed doesn't exist right now, replacing a non-leaf entry
>>> (page table) with a large page is very well supported (see e.g.
>>> ept_set_entry(), which even has a comment to that effect).
> 
> I don't see anything equivalent in the NPT or IOMMU logic.

Look for intermediate_entry in p2m_pt_set_entry(). In the AMD
IOMMU code, see iommu_merge_pages(). For VT-d I agree.
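
To make the distinction concrete, here is a purely conceptual sketch
(deliberately not the actual ept_set_entry() / p2m_pt_set_entry() code;
the types are made up for illustration) of the asymmetry: writing a
superpage over an existing intermediate table is straightforward,
whereas nothing walks a fully populated L1 and merges it back into a
single leaf.

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Conceptual model only: a slot is either a leaf mapping or a pointer
 * to a lower-level table of 512 entries. */
typedef struct pte {
    bool present;
    bool is_leaf;
    union {
        uint64_t mfn;        /* leaf: first frame of the (super)page */
        struct pte *table;   /* non-leaf: next-level table */
    };
} pte_t;

/* Install a superpage mapping at 'slot'. If the slot currently holds
 * an intermediate table (e.g. one left behind after mapping and then
 * removing a single 4k frame), that table is simply freed and
 * replaced - the "replacing a non-leaf entry with a large page" case.
 * What doesn't exist is the reverse: re-combining 512 contiguous 4k
 * leaves back into one entry like this. */
void set_superpage(pte_t *slot, uint64_t mfn)
{
    if ( slot->present && !slot->is_leaf )
        free(slot->table);

    slot->present = true;
    slot->is_leaf = true;
    slot->mfn = mfn;
}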

Jan

