
Re: [Xen-devel] [PATCH 6/8] x86/EPT: force re-evaluation of memory type as necessary



>>> On 27.03.14 at 14:08, <tim@xxxxxxx> wrote:
> At 15:36 +0000 on 26 Mar (1395844584), Jan Beulich wrote:
>> +bool_t ept_handle_misconfig(uint64_t gpa)
>> +{
>> +    struct vcpu *curr = current;
>> +    struct p2m_domain *p2m = p2m_get_hostp2m(curr->domain);
>> +    struct ept_data *ept = &p2m->ept;
>> +    unsigned int level = ept_get_wl(ept);
>> +    unsigned long gfn = PFN_DOWN(gpa);
>> +    unsigned long mfn = ept_get_asr(ept);
>> +    ept_entry_t *epte;
>> +    bool_t okay;
>> +
>> +    if ( !mfn )
>> +        return 0;
>> +
>> +    p2m_lock(p2m);
>> +
>> +    for ( okay = curr->arch.hvm_vmx.ept_spurious_misconfig; ; --level) {
> 
> I'd prefer a 'while (1)' with the --level at the bottom of the loop
> beside where mfn gets set to the next level's mfn.  If not that, can
> you at least move the init of 'okay' out to its own line?

Moving it onto its own line is okay with me (albeit I think having it in the
for() expresses more clearly that this is the loop's start state).
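
For illustration, that would look roughly like this (a sketch only, reusing
the identifiers from the quoted hunk; only the placement of the 'okay'
initialisation changes, the loop body stays as in the patch):

    okay = curr->arch.hvm_vmx.ept_spurious_misconfig;

    for ( ; ; --level )
    {
        ept_entry_t e;
        unsigned int i;

        /* ... body unchanged from the patch ... */
    }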

>> +        ept_entry_t e;
>> +        unsigned int i;
>> +
>> +        epte = map_domain_page(mfn);
>> +        i = (gfn >> (level * EPT_TABLE_ORDER)) & (EPT_PAGETABLE_ENTRIES - 1);
>> +        e = atomic_read_ept_entry(&epte[i]);
>> +
>> +        if ( level == 0 || is_epte_superpage(&e) )
>> +        {
>> +            uint8_t ipat = 0;
>> +            struct vcpu *v;
>> +
>> +            if ( e.emt != MTRR_NUM_TYPES )
>> +                break;
>> +
>> +            if ( level == 0 )
>> +            {
>> +                for ( gfn -= i, i = 0; i < EPT_PAGETABLE_ENTRIES; ++i )
>> +                {
>> +                    e = atomic_read_ept_entry(&epte[i]);
>> +                    if ( e.emt == MTRR_NUM_TYPES )
>> +                        e.emt = 0;
>> +                    if ( !is_epte_valid(&e) || !is_epte_present(&e) )
>> +                        continue;
>> +                    e.emt = epte_get_entry_emt(p2m->domain, gfn + i,
>> +                                               _mfn(e.mfn), &ipat,
>> +                                               e.sa_p2mt == p2m_mmio_direct);
>> +                    e.ipat = ipat;
>> +                    atomic_write_ept_entry(&epte[i], e);
>> +                }
>> +            }
>> +            else
>> +            {
>> +                e.emt = epte_get_entry_emt(p2m->domain, gfn, _mfn(e.mfn),
>> +                                           &ipat,
>> +                                           e.sa_p2mt == p2m_mmio_direct);
>> +                e.ipat = ipat;
>> +                atomic_write_ept_entry(&epte[i], e);
>> +            }
>> +
>> +            for_each_vcpu ( curr->domain, v )
>> +                v->arch.hvm_vmx.ept_spurious_misconfig = 1;
>> +            okay = 1;
>> +            break;
>> +        }
>> +
>> +        if ( e.emt == MTRR_NUM_TYPES )
>> +        {
>> +            ASSERT(is_epte_present(&e));
>> +            e.emt = 0;
>> +            atomic_write_ept_entry(&epte[i], e);
>> +            unmap_domain_page(epte);
>> +
>> +            ept_invalidate_emt(_mfn(e.mfn));
>> +            okay = 1;
> 
> Do you need to also set ept_spurious_misconfig flags here?  At the
> moment you set them when a present entry gets reset, but what about a
> walk that hits a not-present entry (e.g. mmio_dm)?  There, the first
> CPU to take a fault will clear up the intermediate nodes, but not
> make any change at level 0 or set the ept_spurious_misconfig flags.

Actually I think I need to swap the order of operations instead:
first invalidate the next level, then clear .emt. Then the current
model of needing to set the flag only when making an entry valid
will be correct/consistent afaict.
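
Roughly, i.e. a sketch of the intended reordering only (identifiers as in
the quoted hunk, not the final patch):

    if ( e.emt == MTRR_NUM_TYPES )
    {
        ASSERT(is_epte_present(&e));
        /* Invalidate the level below first, so the intermediate entry
           becomes valid again only after the next level has already been
           flagged for re-evaluation. */
        ept_invalidate_emt(_mfn(e.mfn));
        e.emt = 0;
        atomic_write_ept_entry(&epte[i], e);
        unmap_domain_page(epte);

        okay = 1;
    }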

Jan

