
Re: [PATCH 1/5] x86/HVM: wire up multicalls


  • To: Jan Beulich <jbeulich@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Fri, 18 Jun 2021 14:02:26 +0100
  • Cc: Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>
  • Delivery-date: Fri, 18 Jun 2021 13:03:03 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 18/06/2021 11:23, Jan Beulich wrote:
> To be able to use them from, in particular, the tool stack, they need to
> be supported for all guest types. Note that xc_resource_op() already
> does, so would not work without this on PVH Dom0.

I'm not a fan of multicalls as a concept - they're mostly a layering
violation that adds substantial complexity - and frankly, working around
a Linux kernel/user ABI error is a terrible reason to make this change.

But I won't object if it happens to be the least terrible option going. 
I accept that there are no good options here.
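
(For anyone skimming: the interface the commit message is talking about
is just an array of the public multicall_entry structure handed to
HYPERVISOR_multicall.  Sketch from memory below - see
xen/include/public/xen.h for the authoritative definition.  The compat
variant uses 32-bit fields throughout, which is why the hunk below has
to pick the .native or .compat entry per guest word size.)

    /* Sketch only - the real definition lives in xen/include/public/xen.h. */
    struct multicall_entry {
        xen_ulong_t op;      /* hypercall number, e.g. __HYPERVISOR_memory_op */
        xen_ulong_t result;  /* written back by the hypervisor */
        xen_ulong_t args[6]; /* up to six hypercall arguments */
    };

    /* HYPERVISOR_multicall(call_list, nr_calls) runs the batch and fills
     * in each entry's ->result. */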

> @@ -334,6 +336,39 @@ int hvm_hypercall(struct cpu_user_regs *
>      return curr->hcall_preempted ? HVM_HCALL_preempted : HVM_HCALL_completed;
>  }
>  
> +enum mc_disposition hvm_do_multicall_call(struct mc_state *state)
> +{
> +    struct vcpu *curr = current;
> +    hypercall_fn_t *func = NULL;
> +
> +    if ( hvm_guest_x86_mode(curr) == 8 )
> +    {
> +        struct multicall_entry *call = &state->call;
> +
> +        if ( call->op < ARRAY_SIZE(hvm_hypercall_table) )
> +            func = array_access_nospec(hvm_hypercall_table, call->op).native;
> +        if ( func )
> +            call->result = func(call->args[0], call->args[1], call->args[2],
> +                                call->args[3], call->args[4], call->args[5]);
> +        else
> +            call->result = -ENOSYS;
> +    }
> +    else
> +    {
> +        struct compat_multicall_entry *call = &state->compat_call;
> +
> +        if ( call->op < ARRAY_SIZE(hvm_hypercall_table) )
> +            func = array_access_nospec(hvm_hypercall_table, call->op).compat;
> +        if ( func )
> +            call->result = func(call->args[0], call->args[1], call->args[2],
> +                                call->args[3], call->args[4], call->args[5]);
> +        else
> +            call->result = -ENOSYS;
> +    }
> +
> +    return !hvm_get_cpl(curr) ? mc_continue : mc_preempt;

This is ported across from XSA-213, but even for PV guests it was, IIRC,
just defence in depth for any cases we hadn't spotted that change
privilege.

There is no pagetable accounting in HVM guests to become confused by a
privilege change, and hvm_get_cpl() isn't totally free.  Any kernel
which puts VCPUOP_initialise in a multicall gets to keep all resulting
pieces.

I think this wants to be just "return mc_continue;"
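
Concretely, the tail of hvm_do_multicall_call() would then read (a
sketch of the suggested simplification only, not tested):

    /*
     * Unlike PV, there is no pagetable type accounting for a privilege
     * change to confuse mid-batch, so don't pay for hvm_get_cpl() on
     * every call.
     */
    return mc_continue;
}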

If so, Begrudgingly acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>

~Andrew