[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH] x86/vvmx: Fix nested virt on VMCS-Shadow capable hardware


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Mon, 5 Aug 2019 12:52:34 +0000
  • Accept-language: en-US
  • Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Jun Nakajima <jun.nakajima@xxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Mon, 05 Aug 2019 12:56:46 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [Xen-devel] [PATCH] x86/vvmx: Fix nested virt on VMCS-Shadow capable hardware

On 30.07.2019 16:42, Andrew Cooper wrote:
> c/s e9986b0dd "x86/vvmx: Simplify per-CPU memory allocations" had the wrong
> indirection on its pointer check in nvmx_cpu_up_prepare(), causing the
> VMCS-shadowing buffer never to be allocated.  Fix it.
> 
> This in turn results in a massive quantity of logspam, as every virtual
> vmentry/exit hits both gdprintk()s in the *_bulk() functions.

The "in turn" here applies to the original bug (which gets fixed here)
aiui, i.e. there isn't any log spam with the fix in place anymore, is
there? If so, ...

> Switch these to using printk_once().  The size of the buffer is chosen at
> compile time, so complaining about it repeatedly is of no benefit.

... I'm not sure I'd agree with this move: Why would it be of interest
only the first time that we (would have) overrun the buffer? After all,
it's not only the compile-time choice of buffer size that matters here,
but also the runtime aspect of what value of "n" gets passed into the
functions. If this is done on the assumption that we merely want to know
that the overrun happened, not how often it occurs, then I'd think this
ought to remain a debugging printk().
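
To illustrate what I mean by retaining a debugging printk() while still
reporting the runtime value involved (just a sketch, the exact wording
doesn't matter to me):

    gdprintk(XENLOG_DEBUG,
             "VMCS sync falling back to non-bulk mode: %u fields, buffer size %u\n",
             n, VMCS_BUF_SIZE);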

> Finally, drop the runtime NULL pointer checks.  It is not terribly appropriate
> to be repeatedly checking infrastructure which is set up from start-of-day,
> and in this case, actually hid the above bug.

I don't see how the repeated checking would have hidden any bug: Due
to the lack of the extra indirection, the pointer would have remained
NULL, and hence the log message would have appeared (as also mentioned
above) _until_ you fixed the indirection mistake. (This isn't to say
I'm against dropping the check; I'd just like to understand the why.)
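
Just to make sure we mean the same thing - the kind of indirection
mistake I read the description as referring to would be something like
the standalone sketch below (names and sizes invented; this isn't the
actual Xen code):

    #include <errno.h>
    #include <stdlib.h>

    #define NR_CPUS  8
    #define BUF_SIZE 4096

    static void *percpu_buf[NR_CPUS];

    static int cpu_up_prepare(unsigned int cpu)
    {
        void **slot = &percpu_buf[cpu];

        /*
         * Wrong indirection: this tests the address of the per-CPU slot,
         * which is never NULL, so we return early and the buffer is never
         * allocated.  The intended check is on the contents, i.e.
         * "if ( *slot )".
         */
        if ( slot )
            return 0;

        *slot = malloc(BUF_SIZE);
        return *slot ? 0 : -ENOMEM;
    }

With the wrong check in place the pointer stays NULL, which is exactly
why the (runtime) NULL check in the *_bulk() functions would have kept
emitting the fallback message rather than hiding anything.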

> @@ -922,11 +922,10 @@ static void vvmcs_to_shadow_bulk(struct vcpu *v, unsigned int n,
>       if ( !cpu_has_vmx_vmcs_shadowing )
>           goto fallback;
>   
> -    if ( !value || n > VMCS_BUF_SIZE )
> +    if ( n > VMCS_BUF_SIZE )
>       {
> -        gdprintk(XENLOG_DEBUG, "vmcs sync fall back to non-bulk mode, "
> -                 "buffer: %p, buffer size: %d, fields number: %d.\n",
> -                 value, VMCS_BUF_SIZE, n);
> +        printk_once(XENLOG_ERR "%pv VMCS sync too many fields %u\n",
> +                    v, n);
>           goto fallback;
>       }
>   
> @@ -962,11 +961,10 @@ static void shadow_to_vvmcs_bulk(struct vcpu *v, unsigned int n,
>       if ( !cpu_has_vmx_vmcs_shadowing )
>           goto fallback;
>   
> -    if ( !value || n > VMCS_BUF_SIZE )
> +    if ( n > VMCS_BUF_SIZE )
>       {
> -        gdprintk(XENLOG_DEBUG, "vmcs sync fall back to non-bulk mode, "
> -                 "buffer: %p, buffer size: %d, fields number: %d.\n",
> -                 value, VMCS_BUF_SIZE, n);
> +        printk_once(XENLOG_ERR "%pv VMCS sync too many fields %u\n",
> +                    v, n);

Would you mind taking the opportunity and also disambiguate the two
log messages, so that from observing one it is clear which instance
it was that got triggered?
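
E.g. - purely as an illustration, the exact form doesn't matter to me -
folding the function name into the message would already be enough:

    printk_once(XENLOG_ERR "%pv %s: VMCS sync too many fields %u\n",
                v, __func__, n);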

Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

