
Re: [xen-unstable test] 162845: regressions - FAIL


  • To: Anthony PERARD <anthony.perard@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 16 Jun 2021 17:34:20 +0200
  • Cc: Ian Jackson <iwj@xxxxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, osstest service owner <osstest-admin@xxxxxxxxxxxxxx>
  • Delivery-date: Wed, 16 Jun 2021 15:34:30 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 16.06.2021 17:12, Anthony PERARD wrote:
> On Wed, Jun 16, 2021 at 04:49:33PM +0200, Jan Beulich wrote:
>> I don't think it should. But I now notice I should have looked at the
>> logs of these tests:
>>
>> xc: info: Saving domain 2, type x86 HVM
>> xc: error: Unable to obtain the guest p2m size (1 = Operation not 
>> permitted): Internal error
>> xc: error: Save failed (1 = Operation not permitted): Internal error
>>
>> which looks suspiciously similar to the issue Jürgen's d21121685fac
>> ("tools/libs/guest: fix save and restore of pv domains after 32-bit
>> de-support") took care of, just that here we're dealing with a HVM
>> guest. I'll have to go inspect what exactly the library is doing there,
>> and hence where in Xen the -EPERM may be coming from all of a
>> sudden (and only for OVMF).
>>
>> Of course the behavior you describe above may play into this, since
>> aiui this might lead to an excessively large p2m (depending on what
>> exactly you mean by "as high as possible").
> 
> The maximum physical address size as reported by cpuid 0x80000008
> (or 1<<48 if above that) minus 1 page, or 1<<36 - 1 page.

So this is very likely the problem, and not just for a 32-bit tool
stack right now. With ...

long do_memory_op(xc_interface *xch, int cmd, void *arg, size_t len)
{
    DECLARE_HYPERCALL_BOUNCE(arg, len, XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
    long ret = -1;
    ...
    ret = xencall2(xch->xcall, __HYPERVISOR_memory_op,
                   cmd, HYPERCALL_BUFFER_AS_ARG(arg));

... I'm disappointed to find:

int xencall0(xencall_handle *xcall, unsigned int op);
int xencall1(xencall_handle *xcall, unsigned int op,
             uint64_t arg1);
int xencall2(xencall_handle *xcall, unsigned int op,
             uint64_t arg1, uint64_t arg2);
...
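
To spell out what I suspect happens (just a sketch of mine, not the
actual call chain): with the guest's top of RAM placed as described
above, the maximum GFN no longer fits in 32 bits, and squeezing it
through an int-returning wrapper turns it into what looks like a small
negative errno value:

#include <stdint.h>
#include <stdio.h>

/* Sketch only: a 64-bit memory-op result coming back through an
 * int-returning path, as with xencall2() above. */
static int narrowed_return(uint64_t hypercall_result)
{
    /* On the usual two's-complement ABIs this keeps just the low
     * 32 bits, interpreted as signed. */
    return hypercall_result;
}

int main(void)
{
    uint64_t max_gfn = (1ULL << 36) - 1; /* 0xfffffffff, per the layout above */
    int ret = narrowed_return(max_gfn);

    /* ret is now -1, i.e. -EPERM, which is exactly the
     * "1 = Operation not permitted" the save log shows. */
    printf("ret = %d\n", ret);
    return 0;
}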

I was sure we had seen the problem of a truncated memory-op hypercall
result in the past already, i.e. that a known problem simply got
re-introduced. But no - I've found that commit (a27f1fb69d13), and
afaict it never really had any effect: adjusting do_memory_op()'s
return type wasn't sufficient while do_xen_hypercall() was still
returning only int. Now on to figuring out a not overly intrusive way
of addressing this.
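
One possibility might be a long-returning sibling next to the existing
wrappers, with do_memory_op() switched over to it (a hypothetical
sketch only - the name and layering are assumptions of mine, not a
concrete proposal):

long xencall2L(xencall_handle *xcall, unsigned int op,
               uint64_t arg1, uint64_t arg2);

and in do_memory_op():

    ret = xencall2L(xch->xcall, __HYPERVISOR_memory_op,
                    cmd, HYPERCALL_BUFFER_AS_ARG(arg));

so that the full 64-bit result reaches the caller, while existing users
of the int-returning xencall2() stay untouched.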

Jan
