
Re: [Xen-devel] [PATCH v3 5/5] x86/domctl: Implement XEN_DOMCTL_get_cpu_policy

>>> On 05.11.18 at 12:16, <andrew.cooper3@xxxxxxxxxx> wrote:
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -1528,6 +1528,38 @@ long arch_do_domctl(
>          recalculate_cpuid_policy(d);
>          break;
>  
> +    case XEN_DOMCTL_get_cpu_policy:
> +        /* Process the CPUID leaves. */
> +        if ( guest_handle_is_null(domctl->u.cpu_policy.cpuid_policy) )
> +            domctl->u.cpu_policy.nr_leaves = CPUID_MAX_SERIALISED_LEAVES;
> +        else if ( (ret = x86_cpuid_copy_to_buffer(
> +                       d->arch.cpuid,
> +                       domctl->u.cpu_policy.cpuid_policy,
> +                       &domctl->u.cpu_policy.nr_leaves)) )
> +            break;
> +
> +        if ( __copy_field_to_guest(u_domctl, domctl,
> +                                   u.cpu_policy.nr_leaves) )
> +        {
> +            ret = -EFAULT;
> +            break;
> +        }
> +
> +        /* Process the MSR entries. */
> +        if ( guest_handle_is_null(domctl->u.cpu_policy.msr_policy) )
> +            domctl->u.cpu_policy.nr_msrs = MSR_MAX_SERIALISED_ENTRIES;
> +        else if ( (ret = x86_msr_copy_to_buffer(
> +                       d->arch.msr,
> +                       domctl->u.cpu_policy.msr_policy,
> +                       &domctl->u.cpu_policy.nr_msrs)) )
> +            break;
> +
> +        if ( __copy_field_to_guest(u_domctl, domctl,
> +                                   u.cpu_policy.nr_msrs)  )
> +            ret = -EFAULT;

Is it really worthwhile having extra code to copy back two fields, rather
than just setting copyback to true (see the sketch below)? Preferably with
this changed, for the hypervisor side:
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
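
A minimal sketch of the alternative, assuming the common copyback handling
at the end of arch_do_domctl() copies the whole domctl structure back to
the caller when copyback is set:

    case XEN_DOMCTL_get_cpu_policy:
        /* Process the CPUID leaves. */
        if ( guest_handle_is_null(domctl->u.cpu_policy.cpuid_policy) )
            domctl->u.cpu_policy.nr_leaves = CPUID_MAX_SERIALISED_LEAVES;
        else if ( (ret = x86_cpuid_copy_to_buffer(
                       d->arch.cpuid,
                       domctl->u.cpu_policy.cpuid_policy,
                       &domctl->u.cpu_policy.nr_leaves)) )
            break;

        /* Process the MSR entries. */
        if ( guest_handle_is_null(domctl->u.cpu_policy.msr_policy) )
            domctl->u.cpu_policy.nr_msrs = MSR_MAX_SERIALISED_ENTRIES;
        else if ( (ret = x86_msr_copy_to_buffer(
                       d->arch.msr,
                       domctl->u.cpu_policy.msr_policy,
                       &domctl->u.cpu_policy.nr_msrs)) )
            break;

        /* Let the common tail copy the updated counts back to the caller. */
        copyback = true;
        break;

The main behavioural difference is that the whole structure gets copied
back in one go, and only when both serialisations succeed, rather than each
count field being copied back individually.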

Jan



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
