
Re: [Xen-devel] [PATCH v2 4/6] x86/NPT: don't walk entire page tables when changing types on a range



>>> On 24.04.14 at 18:25, <tim@xxxxxxx> wrote:
> At 13:30 +0100 on 22 Apr (1398169846), Jan Beulich wrote:
>> This builds on the fact that in order for no NPF VM exit to occur,
>> _PAGE_USER must always be set. I.e. by clearing the flag we can force a
>> VM exit, allowing us to do lazy type changes similar to those on EPT.
>> 
>> That way, the generic entry-wise code can go away, and we could remove
>> the range restriction enforced on HVMOP_track_dirty_vram for XSA-27.
>> 
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> 
>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -2557,7 +2557,16 @@ void svm_vmexit_handler(struct cpu_user_
>>          perfc_incra(svmexits, VMEXIT_NPF_PERFC);
>>          if ( cpu_has_svm_decode )
>>              v->arch.hvm_svm.cached_insn_len = vmcb->guest_ins_len & 0xf;
>> -        svm_do_nested_pgfault(v, regs, vmcb->exitinfo1, vmcb->exitinfo2);
>> +        rc = p2m_npt_fault(vmcb->exitinfo2);
> 
> Can we limit the classes of fault that we need to check for
> recalc on?  e.g. if bit 0 of the error code is clear, then the page is
> not mapped at all. 

That's a good suggestion (albeit I'm not sure whether faults with P
clear really occur frequently - with paging perhaps, but that's a
slow path anyway). Ideally we'd have an indication that the fault
occurred because of the U bit being clear, but sadly no such
indication exists.
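
Something along these lines is what I'd expect (a rough sketch only,
untested, against the hunk above; PFEC_page_present is the P bit of
the error code, which the NPF exit delivers in exitinfo1, and I'm
assuming rc == 0 to mean "no recalc done, take the normal path"):

        if ( vmcb->exitinfo1 & PFEC_page_present )
            rc = p2m_npt_fault(vmcb->exitinfo2);
        else
            rc = 0; /* not-present fault - can't stem from a cleared U bit */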

> I'm unsure about having two different fixup paths here anyway -- one
> for log-dirty and one for everything else (PoD, sharing &c).  This
> call should probably go _inside_ svm_do_nested_pgfault() at least.

I intentionally kept it separate, so that it doesn't diverge any
further than necessary from how the EPT handling is structured.

> Also, maybe it wants a more descriptive name than p2m_npt_fault().
> p2m_pt_handle_misconfig(), to match the EPT equivalent?

I first considered naming it that way, but it's not really a
misconfiguration. Still, if you think that's the better name despite
it not describing what the function actually does, I'm okay with
changing it...

>> +static int p2m_pt_change_entry_type_range(struct p2m_domain *p2m,
>> +                                          p2m_type_t ot, p2m_type_t nt,
>> +                                          unsigned long first_gfn,
>> +                                          unsigned long last_gfn)
>> +{
>> +    unsigned long mask = (1 << PAGETABLE_ORDER) - 1;
>> +    unsigned int i;
>> +    int err = 0;
>> +
>> +    ASSERT(hap_enabled(p2m->domain));
>> +
>> +    for ( i = 1; i <= 4; )
>> +    {
>> +        if ( first_gfn & mask )
>> +        {
>> +            unsigned long end_gfn = min(first_gfn | mask, last_gfn);
>> +
>> +            err = p2m_pt_set_recalc_range(p2m, i, first_gfn, end_gfn);
>> +            if ( err || end_gfn >= last_gfn )
>> +                break;
>> +            first_gfn = end_gfn + 1;
>> +        }
>> +        else if ( (last_gfn & mask) != mask )
>> +        {
>> +            unsigned long start_gfn = max(first_gfn, last_gfn & ~mask);
>> +
>> +            err = p2m_pt_set_recalc_range(p2m, i, start_gfn, last_gfn);
>> +            if ( err || start_gfn <= first_gfn )
>> +                break;
>> +            last_gfn = start_gfn - 1;
>> +        }
>> +        else
>> +        {
>> +            ++i;
>> +            mask |= mask << PAGETABLE_ORDER;
>> +        }
>> +    }
>> +
>> +    return err;
>> +}
> 
> This looks like it could be merged with the EPT equivalent.  Or would
> that be too unwieldy?

I considered unifying this function and/or its helpers with their EPT
equivalents, but decided that the result would be uglier (too many
parameters just to tell the one use from the other) than the
duplication resulting from this approach.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
