
Re: [Xen-devel] [PATCH V2 1/2] x86/altp2m: Add hypercall to set a range of sve bits




On 12.11.2019 13:54, Jan Beulich wrote:
> On 06.11.2019 16:35, Alexandru Stefan ISAILA wrote:
>> @@ -4681,7 +4682,7 @@ static int do_altp2m_op(
>>           break;
>>   
>>       case HVMOP_altp2m_set_suppress_ve:
>> -        if ( a.u.suppress_ve.pad1 || a.u.suppress_ve.pad2 )
>> +        if ( a.u.suppress_ve.pad1 )
> 
> Just because the field changes its name doesn't mean you can
> drop the check. You even add a new field not used (yet) by
> this sub-function, which then also would need checking here.

I will revert the change and check the new field.
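Something along these lines (just a sketch, mirroring the existing check and treating the new field as must-be-zero for this single-page sub-op):

    case HVMOP_altp2m_set_suppress_ve:
        /* Keep rejecting non-zero padding, and also reject a non-zero
         * 'nr', which this sub-op does not use (yet). */
        if ( a.u.suppress_ve.pad1 || a.u.suppress_ve.nr )
            rc = -EINVAL;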

> 
>> @@ -4693,8 +4694,23 @@ static int do_altp2m_op(
>>           }
>>           break;
>>   
>> +    case HVMOP_altp2m_set_suppress_ve_multi:
>> +        if ( a.u.suppress_ve.pad1 || !a.u.suppress_ve.nr )
> 
> A count of zero typically is taken as a no-op, not an error.

I will return -EPERM for !nr.

> 
>> +            rc = -EINVAL;
>> +        else
>> +        {
>> +            rc = p2m_set_suppress_ve_multi(d, &a.u.suppress_ve);
>> +
>> +            if ( rc == -ERESTART )
>> +                if ( __copy_field_to_guest(guest_handle_cast(arg,
>> +                                           xen_hvm_altp2m_op_t),
>> +                                           &a, u.suppress_ve.opaque) )
>> +                    rc = -EFAULT;
> 
> If the operation is best effort, _some_ indication of failure should
> still be handed back to the caller. Whether that's through the opaque
> field or by some other means is secondary. If not via that field
> (which would make the outer of the two if()-s disappear), please fold
> the if()-s.

This can be solved by having an int error_list that gets handed back to 
the guest with copy_to_guest_offset() at the end.

> 
>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -3054,6 +3054,64 @@ out:
>>       return rc;
>>   }
>>   
>> +/*
>> + * Set/clear the #VE suppress bit for multiple pages.  Only available on VMX.
>> + */
>> +int p2m_set_suppress_ve_multi(struct domain *d,
>> +                              struct xen_hvm_altp2m_suppress_ve* sve)
> 
> Misplaced *.

I've missed that; I'll fix it.

> 
>> +{
>> +    struct p2m_domain *host_p2m = p2m_get_hostp2m(d);
>> +    struct p2m_domain *ap2m = NULL;
>> +    struct p2m_domain *p2m;
>> +    uint64_t start = sve->opaque ?: sve->gfn;
> 
> According to this start (and hence ->opaque) are GFNs.
> 
>> +    int rc = 0;
>> +
>> +    if ( sve->view > 0 )
>> +    {
>> +        if ( sve->view >= MAX_ALTP2M ||
>> +             d->arch.altp2m_eptp[sve->view] == mfn_x(INVALID_MFN) )
>> +            return -EINVAL;
>> +
>> +        p2m = ap2m = d->arch.altp2m_p2m[sve->view];
>> +    }
>> +    else
>> +        p2m = host_p2m;
>> +
>> +    p2m_lock(host_p2m);
>> +
>> +    if ( ap2m )
>> +        p2m_lock(ap2m);
>> +
>> +
>> +    while ( start < sve->nr )
> 
> According to this, start is an index. Which of the two do you
> mean?

Start is a GFN.
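So the loop bound should be expressed in GFNs as well; a minimal sketch 
of what I have in mind (continuation handling as in the hunk above):

    uint64_t start = sve->opaque ?: sve->gfn;

    /* Treat 'start' as a GFN throughout, so the bound is gfn + nr
     * rather than nr alone. */
    while ( start < sve->gfn + sve->nr )
    {
        /* ... set/clear the SVE bit for GFN 'start' ... */

        if ( ++start < sve->gfn + sve->nr && hypercall_preempt_check() )
        {
            sve->opaque = start;
            rc = -ERESTART;
            break;
        }
    }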

> 
>> --- a/xen/include/public/hvm/hvm_op.h
>> +++ b/xen/include/public/hvm/hvm_op.h
>> @@ -42,8 +42,9 @@ struct xen_hvm_altp2m_suppress_ve {
>>       uint16_t view;
>>       uint8_t suppress_ve; /* Boolean type. */
>>       uint8_t pad1;
>> -    uint32_t pad2;
>> +    uint32_t nr;
>>       uint64_t gfn;
>> +    uint64_t opaque;
>>   };
> 
> How is this addition of a field going to work compatibly with old
> and new callers on old and new hypervisors? Recall in particular
> that these operations are (almost?) all potentially usable by the
> guest itself.
> 

For this, HVMOP_ALTP2M_INTERFACE_VERSION should be increased. I will leave 
it to Tamas to decide whether we need a separate 
xen_hvm_altp2m_suppress_ve_multi structure to keep compatibility.
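If a separate structure turns out to be the way to go, something along 
these lines would keep the existing layout untouched (field names purely 
illustrative):

    struct xen_hvm_altp2m_suppress_ve {
        uint16_t view;
        uint8_t suppress_ve; /* Boolean type. */
        uint8_t pad1;
        uint32_t pad2;
        uint64_t gfn;
    };

    struct xen_hvm_altp2m_suppress_ve_multi {
        uint16_t view;
        uint8_t suppress_ve; /* Boolean type. */
        uint8_t pad1;
        uint32_t nr;         /* Number of GFNs, starting at first_gfn. */
        uint64_t first_gfn;
        uint64_t opaque;     /* For hypercall continuation; callers
                              * should set it to 0. */
    };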

Regards,
Alex