Re: [Xen-devel] [PATCH v2 09/12] x86/altp2m: add remaining support routines.
On 06/24/2015 06:46 AM, Andrew Cooper wrote:
> On 22/06/15 19:56, Ed White wrote:
>> Add the remaining routines required to support enabling the alternate
>> p2m functionality.
>>
>> Signed-off-by: Ed White <edmund.h.white@xxxxxxxxx>
>> ---
>> xen/arch/x86/hvm/hvm.c | 60 +++++-
>> xen/arch/x86/mm/hap/Makefile | 1 +
>> xen/arch/x86/mm/hap/altp2m_hap.c | 103 +++++++++
>> xen/arch/x86/mm/p2m-ept.c | 3 +
>> xen/arch/x86/mm/p2m.c | 405 ++++++++++++++++++++++++++++++++++++
>> xen/include/asm-x86/hvm/altp2mhvm.h | 4 +
>> xen/include/asm-x86/p2m.h | 33 +++
>> 7 files changed, 601 insertions(+), 8 deletions(-)
>> create mode 100644 xen/arch/x86/mm/hap/altp2m_hap.c
>>
>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>> index d75c12d..b758ee1 100644
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -2786,10 +2786,11 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
>> p2m_access_t p2ma;
>> mfn_t mfn;
>> struct vcpu *v = current;
>> - struct p2m_domain *p2m;
>> + struct p2m_domain *p2m, *hostp2m;
>> int rc, fall_through = 0, paged = 0;
>> int sharing_enomem = 0;
>> vm_event_request_t *req_ptr = NULL;
>> + int altp2m_active = 0;
>
> bool_t
>
>>
>> /* On Nested Virtualization, walk the guest page table.
>> * If this succeeds, all is fine.
>> @@ -2845,15 +2846,33 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
>> {
>> if ( !handle_mmio_with_translation(gla, gpa >> PAGE_SHIFT, npfec) )
>> hvm_inject_hw_exception(TRAP_gp_fault, 0);
>> - rc = 1;
>> - goto out;
>> + return 1;
>
> What is the justification for skipping the normal out: processing?
>
At one point in the development of this patch, I had some code after
the out: label that assumed at least one p2m lock was held. I observed
that, at the point above, none of the conditions that require extra
processing after out: could be true, so to avoid further complication
I made the above change. Since the code after out: was later factored
out, the change is no longer strictly necessary, but it remains true that
none of the conditions requiring extra out: processing can be true here.
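For reference, the processing after the out: label is roughly the
following (paraphrased from memory rather than quoted from the tree, so
treat the exact calls as approximate); the early return is only safe
because paged, sharing_enomem and req_ptr are all still at their initial
values at that site:

 out_put_gfn:
    __put_gfn(p2m, gfn);            /* drop the p2m lock taken earlier */
 out:
    /* Delayed until all locks are dropped, since these may sleep. */
    if ( paged )
        p2m_mem_paging_populate(v->domain, gfn);
    if ( sharing_enomem )
        mem_sharing_notify_enomem(v->domain, gfn, 1);
    if ( req_ptr )
    {
        mem_access_send_req(v->domain, req_ptr);
        xfree(req_ptr);
    }
    return rc;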
>> }
>>
>> - p2m = p2m_get_hostp2m(v->domain);
>> - mfn = get_gfn_type_access(p2m, gfn, &p2mt, &p2ma,
>> + altp2m_active = altp2mhvm_active(v->domain);
>> +
>> + /* Take a lock on the host p2m speculatively, to avoid potential
>> + * locking order problems later and to handle unshare etc.
>> + */
>> + hostp2m = p2m_get_hostp2m(v->domain);
>> + mfn = get_gfn_type_access(hostp2m, gfn, &p2mt, &p2ma,
>> P2M_ALLOC | (npfec.write_access ? P2M_UNSHARE : 0),
>> NULL);
>>
>> + if ( altp2m_active )
>> + {
>> + if ( altp2mhvm_hap_nested_page_fault(v, gpa, gla, npfec, &p2m) == 1 )
>> + {
>> + /* entry was lazily copied from host -- retry */
>> + __put_gfn(hostp2m, gfn);
>> + return 1;
>
> Again, please don't skip the out: processing.
Same thing: there is no possibility of extra out: processing being
required at this point. There is precedent for this: the MMIO bypass
above skips out: processing too.
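To be clear about what happens on that path:
altp2mhvm_hap_nested_page_fault() lazily copies the host p2m entry into
the active altp2m and returns 1, so the faulting instruction is simply
re-executed against the now-populated view. In rough pseudocode
(illustrative names only, not the patch itself):

    /* Sketch of the lazy-copy-and-retry idea (hypothetical helper,
     * approximate callback signatures, locking omitted). */
    static int altp2m_lazy_copy_sketch(unsigned long gfn,
                                       struct p2m_domain *hp2m,
                                       struct p2m_domain *ap2m)
    {
        p2m_type_t t;
        p2m_access_t a;
        mfn_t mfn = ap2m->get_entry(ap2m, gfn, &t, &a, 0, NULL);

        if ( mfn_valid(mfn) )
            return 0;       /* altp2m already has an entry: real fault */

        mfn = hp2m->get_entry(hp2m, gfn, &t, &a, 0, NULL);
        if ( !mfn_valid(mfn) )
            return 0;       /* nothing to copy; host p2m handles it */

        /* Copy the host mapping into the altp2m, then retry the access. */
        ap2m->set_entry(ap2m, gfn, mfn, PAGE_ORDER_4K, t, a);
        return 1;
    }

Nothing that the out: label handles can be pending when that routine
returns 1.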
>> + if ( rv ) {
>
> Style (brace on new line)
>
>> + gdprintk(XENLOG_ERR,
>> + "failed to set entry for %#"PRIx64" -> %#"PRIx64"\n",
>
> It would be useful to log more information (e.g. which altp2m), and to
> prefer gfn over gpa to avoid mixing unqualified linear and frame numbers.
Ack on both counts. I copied this from somewhere else, and in my
private branch I carry a patch which logs much more info.
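Something along these lines is what I have in mind (a sketch only; the
field holding the active altp2m index is assumed here for illustration):

    /* vcpu_altp2m(v).p2midx is assumed purely for illustration. */
    gdprintk(XENLOG_ERR,
             "failed to set altp2m[%u] entry: gfn %#lx -> mfn %#lx\n",
             vcpu_altp2m(v).p2midx, gfn, mfn_x(mfn));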
>> +bool_t p2m_destroy_altp2m_by_id(struct domain *d, uint16_t idx)
>> +{
>> + struct p2m_domain *p2m;
>> + struct vcpu *curr = current;
>> + struct vcpu *v;
>> + bool_t rc = 0;
>> +
>> + if ( !idx || idx > MAX_ALTP2M )
>> + return rc;
>> +
>> + if ( curr->domain != d )
>> + domain_pause(d);
>> + else
>> + for_each_vcpu( d, v )
>> + if ( curr != v )
>> + vcpu_pause(v);
>
> This looks like some hoop jumping around the assertions in
> domain_pause() and vcpu_pause().
>
> We should probably have some new helpers where the domain needs to be
> paused, possibly while in context. The current domain/vcpu_pause() are
> almost always used where it is definitely not safe to pause in context,
> hence the assertions.
>
It is. I'd be happy to use new helpers, but I don't feel qualified to
write them.
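Roughly what I would imagine, though, is something like the following
(an untested sketch, not an existing Xen API; it would still need the
same careful pause-count handling as domain_pause()/vcpu_pause()):

    /* Untested sketch: pause all vcpus of d except the calling one,
     * so it is safe to call with current being a vcpu of d. */
    static void domain_pause_except_self(struct domain *d)
    {
        struct vcpu *curr = current, *v;

        if ( curr->domain != d )
        {
            domain_pause(d);
            return;
        }

        for_each_vcpu ( d, v )
            if ( v != curr )
                vcpu_pause(v);
    }

with a matching domain_unpause_except_self() for the unpause path.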
Ed
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel