
Re: [PATCH 2/4] tools/tests: Unit test for paging mempool size


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
  • Date: Thu, 17 Nov 2022 16:27:03 +0000
  • Cc: Roger Pau Monne <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Henry Wang <Henry.Wang@xxxxxxx>, Anthony Perard <anthony.perard@xxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 17 Nov 2022 16:27:17 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH 2/4] tools/tests: Unit test for paging mempool size

On 17/11/2022 10:39, Jan Beulich wrote:
> On 17.11.2022 02:08, Andrew Cooper wrote:
>> Exercise some basic functionality of the new
>> xc_{get,set}_paging_mempool_size() hypercalls.
>>
>> This passes on x86, but fails currently on ARM.  ARM will be fixed up in
>> future patches.
>>
>> This is part of XSA-409 / CVE-2022-33747.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>> Release-acked-by: Henry Wang <Henry.Wang@xxxxxxx>
> Acked-by: Jan Beulich <jbeulich@xxxxxxxx>

Thanks.

> (if this counts for anything, since as it stands the new stuff all falls
> under tool stack maintainership)

I do intend to give it its own section in due course, but this falls
very much into "The Rest" at the moment.

>> --- /dev/null
>> +++ b/tools/tests/paging-mempool/test-paging-mempool.c
>> @@ -0,0 +1,181 @@
>> +#include <err.h>
>> +#include <errno.h>
>> +#include <inttypes.h>
>> +#include <stdio.h>
>> +#include <string.h>
>> +#include <sys/mman.h>
>> +
>> +#include <xenctrl.h>
>> +#include <xenforeignmemory.h>
>> +#include <xengnttab.h>
>> +#include <xen-tools/libs.h>
>> +
>> +static unsigned int nr_failures;
>> +#define fail(fmt, ...)                          \
>> +({                                              \
>> +    nr_failures++;                              \
>> +    (void)printf(fmt, ##__VA_ARGS__);           \
>> +})
>> +
>> +static xc_interface *xch;
>> +static uint32_t domid;
>> +
>> +static struct xen_domctl_createdomain create = {
>> +    .flags = XEN_DOMCTL_CDF_hvm | XEN_DOMCTL_CDF_hap,
> I understand that it is accepted that this test will thus fail when run
> on HAP-incapable hardware (including when run with Xen itself running on
> top of another hypervisor not surfacing HAP capabilities)? Oh, I notice
> you're actually translating EINVAL and EOPNOTSUPP failures into "skip".
> That'll probably do, albeit personally I consider skipping on EINVAL
> (which we use all over the place) as overly relaxed.

Checking capabilities needs to happen anyway to get PV and HVM Shadow
support working.

But this will do for now.
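
As a concrete illustration (not part of this patch), such a capability
probe could look roughly like the sketch below, assuming the xc_physinfo()
wrapper and the XEN_SYSCTL_PHYSCAP_hap bit; the host_has_hap() name is made
up for the example:

    static bool host_has_hap(xc_interface *xch)
    {
        xc_physinfo_t physinfo = {};

        if ( xc_physinfo(xch, &physinfo) )
            return false; /* Treat a failed query as "unsupported" and skip. */

        return physinfo.capabilities & XEN_SYSCTL_PHYSCAP_hap;
    }

A shadow-mode fallback would check the equivalent shadow capability bit and
drop XEN_DOMCTL_CDF_hap from create.flags before creating the test domain.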

>> +static void run_tests(void)
>> +{
>> +    xen_pfn_t physmap[] = { 0 };
> I have to admit that I'm uncertain whether Arm (or other architectures
> to which Xen is planned to be ported) have constraints which may cause
> populating of GFN 0 to fail.

Mechanically, no.  x86 PV is an entirely arbitrary mapping, while all
HVM modes (x86 and ARM) are just filling in a guest physical -> "some
RAM" mapping in a pagetable structure.

Logically, I hope that CDF_directmap on ARM checks for an alias to a
real RAM block, but this capability isn't even exposed to the toolstack.


And now I've looked at this, ewwww.  We've got:

d->options which holds config->flags, DOMCTL_CDF_*
d->cdf which holds the local 'flags' parameter, CDF_*
d->is_privileged which holds 'flags & CDF_privileged' and was never
cleaned up when d->cdf was introduced.

This is unnecessarily confusing to follow.  Yet more for the cleanup pile.
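
For anyone following along, the overlap described above is roughly this
(paraphrasing struct domain from xen/include/xen/sched.h; comments mine,
not verbatim):

    struct domain {
        ...
        unsigned int options;   /* XEN_DOMCTL_CDF_* copied from config->flags */
        unsigned int cdf;       /* internal CDF_* 'flags' given to domain_create() */
        bool is_privileged;     /* cached 'flags & CDF_privileged', predates d->cdf */
        ...
    };

i.e. is_privileged duplicates information now available via d->cdf.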

>> +    uint64_t size_bytes, old_size_bytes;
>> +    int rc;
>> +
>> +    printf("Test default mempool size\n");
>> +
>> +    rc = xc_get_paging_mempool_size(xch, domid, &size_bytes);
>> +    if ( rc )
>> +        return fail("  Fail: get mempool size: %d - %s\n",
>> +                    errno, strerror(errno));
>> +
>> +    printf("mempool size %"PRIu64" bytes (%"PRIu64"kB, %"PRIu64"MB)\n",
>> +           size_bytes, size_bytes >> 10, size_bytes >> 20);
>> +
>> +
>> +    /*
>> +     * Check that the domain has the expected default allocation size.  This
>> +     * will fail if the logic in Xen is altered without an equivelent
> Nit: equivalent

Fixed.

>
>> +     * adjustment here.
>> +     */
>> +    if ( size_bytes != default_mempool_size_bytes )
>> +        return fail("  Fail: size %"PRIu64" != expected size %"PRIu64"\n",
>> +                    size_bytes, default_mempool_size_bytes);
>> +
>> +
>> +    printf("Test that allocate doesn't alter pool size\n");
>> +
>> +    /*
>> +     * Populate the domain with some RAM.  This will cause more of the mempool
>> +     * to be used.
>> +     */
>> +    old_size_bytes = size_bytes;
>> +
>> +    rc = xc_domain_setmaxmem(xch, domid, -1);
>> +    if ( rc )
>> +        return fail("  Fail: setmaxmem: %d - %s\n",
>> +                    errno, strerror(errno));
>> +
>> +    rc = xc_domain_populate_physmap_exact(xch, domid, 1, 0, 0, physmap);
>> +    if ( rc )
>> +        return fail("  Fail: populate physmap: %d - %s\n",
>> +                    errno, strerror(errno));
>> +
>> +    /*
>> +     * Re-get the p2m size.  Should not have changed as a consequence of
>> +     * populate physmap.
>> +     */
>> +    rc = xc_get_paging_mempool_size(xch, domid, &size_bytes);
>> +    if ( rc )
>> +        return fail("  Fail: get mempool size: %d - %s\n",
>> +                    errno, strerror(errno));
>> +
>> +    if ( old_size_bytes != size_bytes )
>> +        return fail("  Fail: mempool size changed %"PRIu64" => %"PRIu64"\n",
>> +                    old_size_bytes, size_bytes);
>> +
>> +
>> +
>> +    printf("Test bad set size\n");
>> +
>> +    /*
>> +     * Check that setting a non-page size results in failure.
>> +     */
>> +    rc = xc_set_paging_mempool_size(xch, domid, size_bytes + 1);
>> +    if ( rc != -1 || errno != EINVAL )
>> +        return fail("  Fail: Bad set size: expected -1/EINVAL, got %d/%d - %s\n",
>> +                    rc, errno, strerror(errno));
>> +
>> +
>> +    printf("Test very large set size\n");
> Maybe drop "very", as 64M isn't all that much (and would, in particular,
> not expose any 32-bit truncation issues)?

Hmm yeah.  That was rather stale.

I've switched to "Test set continuation\n" seeing as that's the purpose
of the test.  64M is an internal detail which ought to be small enough
to work reliably everywhere, but large enough to cause a continuation.
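
Concretely, the continuation test ends up looking roughly like this
(reconstructed from the shape of the patch, not quoted verbatim):

    rc = xc_set_paging_mempool_size(xch, domid, 64 << 20);
    if ( rc )
        return fail("  Fail: Set size 64MB: %d - %s\n",
                    errno, strerror(errno));

    /* Re-read; a correct continuation still lands on exactly the size asked for. */
    rc = xc_get_paging_mempool_size(xch, domid, &size_bytes);
    if ( rc )
        return fail("  Fail: get mempool size: %d - %s\n",
                    errno, strerror(errno));

    if ( size_bytes != 64 << 20 )
        return fail("  Fail: expected 64M, got %"PRIu64"\n", size_bytes);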

~Andrew

 

