
Re: [PATCH v4 07/21] IOMMU/x86: support freeing of pagetables


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 4 May 2022 17:06:37 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Wed, 04 May 2022 15:07:20 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, May 04, 2022 at 03:07:24PM +0200, Jan Beulich wrote:
> On 03.05.2022 18:20, Roger Pau Monné wrote:
> > On Mon, Apr 25, 2022 at 10:35:45AM +0200, Jan Beulich wrote:
> >> For vendor specific code to support superpages we need to be able to
> >> deal with a superpage mapping replacing an intermediate page table (or
> >> hierarchy thereof). Consequently an iommu_alloc_pgtable() counterpart is
> >> needed to free individual page tables while a domain is still alive.
> >> Since the freeing needs to be deferred until after a suitable IOTLB
> >> flush was performed, released page tables get queued for processing by a
> >> tasklet.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >> ---
> >> I was considering whether to use a softirq-tasklet instead. This would
> >> have the benefit of avoiding extra scheduling operations, but come with
> >> the risk of the freeing happening prematurely because of a
> >> process_pending_softirqs() somewhere.
> > 
> > I'm sorry again if I already raised this; I can't seem to find a
> > reference.
> 
> Earlier on you only suggested "to perform the freeing after the flush".
> 
> > What about doing the freeing before resuming the guest execution in
> > guest vCPU context?
> > 
> > We already have a hook like this on HVM in hvm_do_resume() calling
> > vpci_process_pending().  I wonder whether we could have a similar hook
> > for PV and keep the pages to be freed in the vCPU instead of the pCPU.
> > This would have the benefit of being able to context switch the vCPU
> > in case the operation takes too long.
> 
> I think this might work in general, but would be troublesome when
> preparing Dom0 (where we don't run on any of Dom0's vCPU-s, and we
> won't ever "exit to guest context" on an idle vCPU). I'm also not
> really keen on using something like
> 
>     v = current->domain == d ? current : d->vcpu[0];

I guess a problematic case would also be hypercalls executed in one
domain's context triggering the freeing of a different domain's IOMMU
page table pages, as then the freeing would be accounted to the
current domain instead of to the owner of the pages.

dom0 doesn't seem that problematic: any freeing triggered in a system
domain context could be performed in place (with
process_pending_softirqs() calls to ensure the watchdog doesn't
trigger).
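
Something like this rough sketch is what I have in mind (the exact
placement and the 0xff granularity are arbitrary on my part):

    /*
     * Free in place when running in a system domain context (e.g.
     * during dom0 construction), where we never exit to guest.
     */
    if ( is_system_domain(current->domain) )
    {
        unsigned int done = 0;
        struct page_info *pg;

        while ( (pg = page_list_remove_head(list)) )
        {
            free_domheap_page(pg);

            /* Keep the watchdog at bay. */
            if ( !(++done & 0xff) )
                process_pending_softirqs();
        }
        return;
    }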

> (leaving aside that we don't really have d available in
> iommu_queue_free_pgtable() and I'd be hesitant to convert it back).
> Otoh it might be okay to free page tables right away for domains
> which haven't run at all so far.

Could be, but then I would think we would have to make the hypercalls
that can trigger those paths preemptible.

> But this would again require
> passing struct domain * to iommu_queue_free_pgtable().

Hm, I guess we could use container_of with the domain_iommu parameter
to obtain a pointer to the domain struct.
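
Something like the following should work, I think (assuming the
domain_iommu struct stays embedded in struct domain as the 'iommu'
field; the helper name is mine):

    static struct domain *iommu_domain(struct domain_iommu *hd)
    {
        /* Recover the owning domain from its embedded IOMMU state. */
        return container_of(hd, struct domain, iommu);
    }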

> Another upside (I think) of the current approach is that all logic
> is contained in a single source file (i.e. in particular there's no
> new field needed in a per-vCPU structure defined in some header).

Right, I do agree with that.  I'm mostly worried about the resource
starvation aspect.  I guess freeing the pages replaced by a 1G
superpage entry is still fine; anything bigger could be a problem.
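
For scale (my arithmetic, assuming 512-entry tables):

    1G entry:   1 L2 + 512 L1             =    513 pages
    512G entry: 1 L3 + 512 L2 + 262144 L1 = 262657 pages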

> > Not that the current approach is wrong, but doing it in the guest
> > resume path we could likely prevent guests doing heavy p2m
> > modifications from hogging CPU time.
> 
> Well, they would still be hogging time, but that time would then be
> accounted towards their time slices, yes.
> 
> >> @@ -550,6 +551,91 @@ struct page_info *iommu_alloc_pgtable(st
> >>      return pg;
> >>  }
> >>  
> >> +/*
> >> + * Intermediate page tables which get replaced by large pages may only be
> >> + * freed after a suitable IOTLB flush. Hence such pages get queued on a
> >> + * per-CPU list, with a per-CPU tasklet processing the list on the assumption
> >> + * that the necessary IOTLB flush will have occurred by the time tasklets get
> >> + * to run. (List and tasklet being per-CPU has the benefit of accesses not
> >> + * requiring any locking.)
> >> + */
> >> +static DEFINE_PER_CPU(struct page_list_head, free_pgt_list);
> >> +static DEFINE_PER_CPU(struct tasklet, free_pgt_tasklet);
> >> +
> >> +static void free_queued_pgtables(void *arg)
> >> +{
> >> +    struct page_list_head *list = arg;
> >> +    struct page_info *pg;
> >> +    unsigned int done = 0;
> >> +
> > 
> > With the current logic I think it might be helpful to assert that the
> > list is not empty when we get here?
> > 
> > Given the operation requires a context switch we would like to avoid
> > such unless there's indeed pending work to do.
> 
> But is that worth adding an assertion and risking killing a system just
> because there's a race somewhere by which we might get here without any
> work to do? If you strongly think we want to know about such instances,
> how about a WARN_ON_ONCE() (except that we still don't have that
> specific construct, it would need to be open-coded for the time being)?

Well, I was recommending an assert because I think it's fine to kill a
debug system in order to catch those outliers. On production builds we
should obviously not crash.
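
FWIW, the open-coded warn-once wouldn't be much code either; a rough
sketch (the 'warned' flag and its placement are my assumptions):

    if ( page_list_empty(list) )
    {
        static bool __read_mostly warned;

        /* Warn (once) about being scheduled with no pending work. */
        if ( !warned )
        {
            warned = true;
            WARN();
        }
        return;
    }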

> >> +static int cf_check cpu_callback(
> >> +    struct notifier_block *nfb, unsigned long action, void *hcpu)
> >> +{
> >> +    unsigned int cpu = (unsigned long)hcpu;
> >> +    struct page_list_head *list = &per_cpu(free_pgt_list, cpu);
> >> +    struct tasklet *tasklet = &per_cpu(free_pgt_tasklet, cpu);
> >> +
> >> +    switch ( action )
> >> +    {
> >> +    case CPU_DOWN_PREPARE:
> >> +        tasklet_kill(tasklet);
> >> +        break;
> >> +
> >> +    case CPU_DEAD:
> >> +        page_list_splice(list, &this_cpu(free_pgt_list));
> > 
> > I think you could check whether list is empty before queuing it?
> 
> I could, but this would make the code (slightly) more complicated
> for improving something which doesn't occur frequently.

It's just a:

if ( page_list_empty(list) )
    break;

at the start of the CPU_DEAD case AFAICT (page_list_empty(), since the
list is a struct page_list_head).  As you say this notifier is not
called frequently, so it's not a big deal (also I don't think the
addition makes the code more complicated).

Now that I look at the code again, I think there's a
tasklet_schedule() missing in the CPU_DOWN_FAILED case if there are
entries pending on the list?
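
I.e. something along these lines (a sketch following the patch's
naming; I'm guessing at what the CPU_DOWN_FAILED case currently does):

    case CPU_DOWN_FAILED:
        tasklet_init(tasklet, free_queued_pgtables, list);
        if ( !page_list_empty(list) )
            tasklet_schedule(tasklet);
        break;

    case CPU_DEAD:
        if ( page_list_empty(list) )
            break;
        page_list_splice(list, &this_cpu(free_pgt_list));
        INIT_PAGE_LIST_HEAD(list);
        tasklet_schedule(&this_cpu(free_pgt_tasklet));
        break;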

Thanks, Roger.



 

