
Re: [Xen-devel] [PATCH 3/8] x86/SVM: Add AVIC vmexit handlers



On 30/04/2018 22:23, Natarajan, Janakarajan wrote:
>>> +static int avic_ldr_write(struct vcpu *v, u8 g_phy_id, u32 ldr,
>>> bool valid)
>>> +{
>>> +    struct avic_logical_id_entry *entry, new_entry;
>>> +    u32 dfr = vlapic_read_aligned(vcpu_vlapic(v), APIC_DFR);
>>> +
>>> +    entry = avic_get_logical_id_entry(&v->domain->arch.hvm_domain.svm,
>>> +                                      ldr, (dfr == APIC_DFR_FLAT));
>>> +    if (!entry)
>> if ( !entry )
>>
>>> +        return -EINVAL;
>>> +
>>> +    new_entry = *entry;
>>> +    smp_rmb();
>>> +    new_entry.guest_phy_apic_id = g_phy_id;
>>> +    new_entry.valid = valid;
>>> +    *entry = new_entry;
>>> +    smp_wmb();
>> These barriers don't do what you want.  The pattern you are looking for
>> would require an smp_mb() between setting valid and writing things
>> back.  However, that is overkill - all that matters is that the compiler
>> doesn't generate multiple partial updates.
>>
>> new_entry.raw = ACCESS_ONCE(entry->raw);
>>
>> new_entry.guest_phy_apic_id = g_phy_id;
>> new_entry.valid = valid;
>>
>> ACCESS_ONCE(entry->raw) = new_entry.raw;
>
> Since it was decided to not use
>
> union ... {
>         uint64_t raw;
>         struct avic_logical_table_entry {
>         ....
>         ....
>         };
> };
>
> would smp_mb() make sense here?

Nothing has been decided yet.  I've made an argument for why the
minimalistic approach (as suggested in the thread hanging off patch
4, which supersedes this) would be correct (and best, IMO), but the
decision as to what code to use is ultimately up to you as the submitter
(subject to it being functionally correct, which is the purpose of review).

In the ACCESS_ONCE() case, all the assignments are strictly dependent on
previous reads, which forces the compiler and pipeline to issue them in
order, so I don't see any reason for an mfence instruction.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

