
Re: [PATCH v3 1/2][4.15] VMX: delay p2m insertion of APIC access page


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Mon, 22 Feb 2021 13:15:01 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Andrew Cooper" <andrew.cooper3@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>, Jun Nakajima <jun.nakajima@xxxxxxxxx>, Julien Grall <julien@xxxxxxx>
  • Delivery-date: Mon, 22 Feb 2021 12:15:30 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Mon, Feb 22, 2021 at 11:56:58AM +0100, Jan Beulich wrote:
> Inserting the mapping at domain creation time leads to a memory leak
> when the creation fails later on and the domain uses separate CPU and
> IOMMU page tables - the latter requires intermediate page tables to be
> allocated, but there's no freeing of them at present in this case. Since
> we don't need the p2m insertion to happen this early, avoid the problem
> altogether by deferring it until the last possible point. This comes at
> the price of not being able to handle an error other than by crashing
> the domain.
> 
> Reported-by: Julien Grall <julien@xxxxxxx>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Acked-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

> ---
> v3: New (split out).
> ---
> Hooking p2m insertion onto arch_domain_creation_finished() isn't very
> nice, but I couldn't find any better hook (nor a good place where to
> introduce a new one). In particular there look to be no hvm_funcs hooks
> being used on a domain-wide basis (except for init/destroy of course).
> I did consider connecting this to the setting of HVM_PARAM_IDENT_PT, but
> considered this no better, especially since the tool stack could be
> smarter and avoid setting that param when not needed.
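
For reference, a minimal sketch of what such a domain-wide hook might
look like. The hook name, the apic_access_mfn variable and the use of
set_mmio_p2m_entry() follow the description above and are assumptions
for illustration, not a verbatim excerpt of the patch:

    /* Assumed to hold the MFN of the APIC access page allocated earlier
     * during domain setup (the real storage location may differ). */
    static mfn_t __read_mostly apic_access_mfn;

    /* Domain-wide hvm_funcs hook, assumed to be invoked from
     * arch_domain_creation_finished() once the domain is fully built. */
    static void vmx_domain_creation_finished(struct domain *d)
    {
        gfn_t gfn = gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE);

        /* Nothing to do without a vLAPIC or if no page was allocated. */
        if ( !has_vlapic(d) || mfn_eq(apic_access_mfn, _mfn(0)) )
            return;

        /* This is the last possible point before the vCPUs start running,
         * so a failure here can only be handled by crashing the domain. */
        if ( set_mmio_p2m_entry(d, gfn, apic_access_mfn, PAGE_ORDER_4K) )
            domain_crash(d);
    }

Deferring the insertion to this point means the intermediate IOMMU page
tables are only ever allocated for domains whose creation has already
succeeded.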

I'm not especially fond of allocating the page in one hook, mapping it
into the p2m in another hook and finally setting up the vmcs fields in
yet another hook.

I would rather have a single place where, for the BSP, the page is
allocated and mapped into the p2m, while for the APs just the vmcs
fields are set. It's IMO slightly difficult to follow the setup when
it's split across so many different places.
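
As a rough illustration of that layout (the function name and the
alloc_and_map_apic_access_page() helper are made up for the example,
not existing Xen symbols), the BSP/AP split could be handled in one
per-vCPU setup path along these lines:

    /* Illustrative only: single per-vCPU setup path.  On the BSP the APIC
     * access page is allocated and inserted into the p2m; every vCPU then
     * points its VMCS at the same page.  Assumed to run with the vCPU's
     * VMCS loaded (e.g. from construct_vmcs()). */
    static int vmx_setup_apic_access(struct vcpu *v)
    {
        struct domain *d = v->domain;
        int rc;

        if ( !cpu_has_vmx_virtualize_apic_accesses )
            return 0;

        if ( v->vcpu_id == 0 )
        {
            /* BSP: allocate the page and map it at APIC_DEFAULT_PHYS_BASE.
             * alloc_and_map_apic_access_page() is a hypothetical helper. */
            rc = alloc_and_map_apic_access_page(d);
            if ( rc )
                return rc;
        }

        /* All vCPUs: point the VMCS at the shared page. */
        __vmwrite(APIC_ACCESS_ADDR, mfn_to_maddr(apic_access_mfn));

        return 0;
    }

That would keep the allocation, the p2m insertion and the VMCS
programming next to each other, at the cost of doing domain-wide work
from a per-vCPU code path.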

Thanks, Roger.
