
Re: [PATCH v4 07/21] IOMMU/x86: support freeing of pagetables


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 4 May 2022 15:07:24 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Wed, 04 May 2022 13:07:35 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 03.05.2022 18:20, Roger Pau Monné wrote:
> On Mon, Apr 25, 2022 at 10:35:45AM +0200, Jan Beulich wrote:
>> For vendor specific code to support superpages we need to be able to
>> deal with a superpage mapping replacing an intermediate page table (or
>> hierarchy thereof). Consequently an iommu_alloc_pgtable() counterpart is
>> needed to free individual page tables while a domain is still alive.
>> Since the freeing needs to be deferred until after a suitable IOTLB
>> flush was performed, released page tables get queued for processing by a
>> tasklet.
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>> ---
>> I was considering whether to use a softirq-tasklet instead. This would
>> have the benefit of avoiding extra scheduling operations, but come with
>> the risk of the freeing happening prematurely because of a
>> process_pending_softirqs() somewhere.
> 
> I'm sorry again if I already raised this, I don't seem to find a
> reference.

Earlier on you only suggested "to perform the freeing after the flush".

> What about doing the freeing before resuming the guest execution in
> guest vCPU context?
> 
> We already have a hook like this on HVM in hvm_do_resume() calling
> vpci_process_pending().  I wonder whether we could have a similar hook
> for PV and keep the pages to be freed in the vCPU instead of the pCPU.
> This would have the benefit of being able to context switch the vCPU
> in case the operation takes too long.

I think this might work in general, but would be troublesome when
preparing Dom0 (where we don't run on any of Dom0's vCPU-s, and we
won't ever "exit to guest context" on an idle vCPU). I'm also not
really fancying to use something like

    v = current->domain == d ? current : d->vcpu[0];

(leaving aside that we don't really have d available in
iommu_queue_free_pgtable() and I'd be hesitant to convert it back).
Otoh it might be okay to free page tables right away for domains
which haven't run at all so far. But this would again require
passing struct domain * to iommu_queue_free_pgtable().
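Just to illustrate what I mean (rough sketch only; the "never ran" test is
a stand-in for a proper check, and the queueing tail is abbreviated rather
than quoting the actual patch):

    void iommu_queue_free_pgtable(struct domain *d, struct page_info *pg)
    {
        unsigned int cpu = smp_processor_id();

        if ( !d->creation_finished )   /* stand-in for a "has never run" check */
        {
            /* No IOTLB can hold references to this table yet. */
            free_domheap_page(pg);
            return;
        }

        page_list_add_tail(pg, &per_cpu(free_pgt_list, cpu));
        tasklet_schedule(&per_cpu(free_pgt_tasklet, cpu));
    }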

Another upside (I think) of the current approach is that all logic
is contained in a single source file (i.e. in particular there's no
new field needed in a per-vCPU structure defined in some header).

> Not that the current approach is wrong, but doing it in the guest
> resume path we could likely prevent guests doing heavy p2m
> modifications from hogging CPU time.

Well, they would still be hogging time, but that time would then be
accounted towards their time slices, yes.

>> @@ -550,6 +551,91 @@ struct page_info *iommu_alloc_pgtable(st
>>      return pg;
>>  }
>>  
>> +/*
>> + * Intermediate page tables which get replaced by large pages may only be
>> + * freed after a suitable IOTLB flush. Hence such pages get queued on a
>> + * per-CPU list, with a per-CPU tasklet processing the list on the assumption
>> + * that the necessary IOTLB flush will have occurred by the time tasklets get
>> + * to run. (List and tasklet being per-CPU has the benefit of accesses not
>> + * requiring any locking.)
>> + */
>> +static DEFINE_PER_CPU(struct page_list_head, free_pgt_list);
>> +static DEFINE_PER_CPU(struct tasklet, free_pgt_tasklet);
>> +
>> +static void free_queued_pgtables(void *arg)
>> +{
>> +    struct page_list_head *list = arg;
>> +    struct page_info *pg;
>> +    unsigned int done = 0;
>> +
> 
> With the current logic I think it might be helpful to assert that the
> list is not empty when we get here?
> 
> Given the operation requires a context switch we would like to avoid
> such unless there's indeed pending work to do.

But is that worth adding an assertion and risking killing a system just
because there's a race somewhere by which we might get here without any
work to do? If you strongly think we want to know about such instances,
how about a WARN_ON_ONCE() (except that we don't have that specific
construct yet, so it would need to be open-coded for the time being)?
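I.e. something along these lines (sketch only) at the top of
free_queued_pgtables():

    /* Open-coded WARN_ON_ONCE(): complain just once if there's no work. */
    static bool warned;

    if ( page_list_empty(list) && !test_and_set_bool(warned) )
        WARN();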

>> +static int cf_check cpu_callback(
>> +    struct notifier_block *nfb, unsigned long action, void *hcpu)
>> +{
>> +    unsigned int cpu = (unsigned long)hcpu;
>> +    struct page_list_head *list = &per_cpu(free_pgt_list, cpu);
>> +    struct tasklet *tasklet = &per_cpu(free_pgt_tasklet, cpu);
>> +
>> +    switch ( action )
>> +    {
>> +    case CPU_DOWN_PREPARE:
>> +        tasklet_kill(tasklet);
>> +        break;
>> +
>> +    case CPU_DEAD:
>> +        page_list_splice(list, &this_cpu(free_pgt_list));
> 
> I think you could check whether list is empty before queuing it?

I could, but this would make the code (slightly) more complicated
for improving something which doesn't occur frequently.
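For completeness, what you're suggesting would be roughly

    case CPU_DEAD:
        if ( !page_list_empty(list) )
        {
            page_list_splice(list, &this_cpu(free_pgt_list));
            INIT_PAGE_LIST_HEAD(list);
            tasklet_schedule(&this_cpu(free_pgt_tasklet));
        }
        break;

i.e. an extra level of nesting just to skip a splice of an empty list.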

Jan
