
Re: [Xen-devel] [RFC PATCH 21/24] ARM: vITS: handle INVALL command



On Tue, 2016-12-06 at 11:36 -0800, Stefano Stabellini wrote:
> On Tue, 6 Dec 2016, Julien Grall wrote:
>
> > > Another approach is to let the scheduler know that migration is
> > > slower. In fact this is not a new problem: it can be slow to
> > > migrate interrupts, even a few non-LPI interrupts, even on x86.
> > > I wonder if the Xen scheduler has any knowledge of that (CC'ing
> > > George and Dario). I guess that's the reason why most people run
> > > with dom0_vcpus_pin.
> > 
> > I gave a quick look at x86: arch_move_irqs is not implemented, and
> > only PIRQs are migrated when a vCPU moves to another pCPU.
> >
> > In the case of ARM, we directly modify the configuration of the
> > hardware. This adds much more overhead, because you have to do a
> > hardware access for every single IRQ.
> 
> George, Dario, any comments on whether this would make sense and how
> to do it?
>
I was actually looking into this, but I don't think I know enough
about ARM in general, or about this issue in particular, to be useful.

That being said, perhaps you could clarify a bit what you mean by
"let the scheduler know that migration is slower". What would you
expect the scheduler to do?

Checking the code, as Julien says, on x86 all we do when we move vCPUs
around is call evtchn_move_pirqs(). In fact, it was precisely that
function that was being called in multiple places in schedule.c, and
it was you who (as Julien already pointed out):
1) in 5bd62a757b9 ("xen/arm: physical irq follow virtual irq"),
   created arch_move_irqs() as something that actually does work on
   ARM, and as an empty stub on x86;
2) in 14f7e3b8a70 ("xen: introduce sched_move_irqs"), generalized the
   schedule.c code by implementing sched_move_irqs() (a rough sketch
   of how these pieces fit together is below).
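
Just to make sure we're talking about the same thing, here is a
minimal sketch of how I understand those pieces fit together. It is
written from memory, so treat it as illustrative rather than as a
verbatim copy of what's in the tree (the ARM loop body in particular
may well differ from the actual code):

/* common/schedule.c -- sketch */
static void sched_move_irqs(struct vcpu *v)
{
    arch_move_irqs(v);    /* real work on ARM, empty stub on x86 */
    evtchn_move_pirqs(v); /* what x86 was already doing */
}

/* ARM side -- illustrative only; the point is the one hardware
 * access per routed IRQ, which is the overhead Julien mentions. */
void arch_move_irqs(struct vcpu *v)
{
    const cpumask_t *cpu_mask = cpumask_of(v->processor);
    unsigned int irq;

    for ( irq = 32; irq < vgic_num_irqs(v->domain); irq++ )
    {
        struct pending_irq *p = irq_to_pending(v, irq);

        if ( p->desc != NULL )
            irq_set_affinity(p->desc, cpu_mask); /* hardware access */
    }
}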

So, if I understood correctly what Julien said here ("I don't think
this would modify the irq migration work flow, etc."), it looks to me
like the suggested lazy approach could be a good solution (though I'm
saying that without really knowing what it would take to implement
it).

If you want something inside the scheduler that sort of delays the
wakeup of a domain on the new pCPU until some condition in the IRQ
handling code is satisfied (but, please, confirm whether or not this
is what you were thinking of), my thoughts, off the top of my head,
are:
- in general, I think it should be possible;
- it has to be arch-specific, I think?
- it's easy to avoid the vCPU being woken as a consequence of the
  vcpu_wake() call at, e.g., the end of vcpu_migrate();
- we must be careful not to forget/fail to (re)wake the vCPU when the
  condition is finally satisfied (see the sketch below).
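
To make the "delayed wakeup" idea a bit more concrete, here is a very
rough and purely hypothetical sketch. Neither _VPF_irq_migrating nor
vcpu_irq_migration_done() exist anywhere; they are just placeholders
for "some condition in the IRQ handling code":

/* In vcpu_migrate() (common/schedule.c), instead of waking
 * unconditionally: */
    if ( !test_bit(_VPF_irq_migrating, &v->pause_flags) )
        vcpu_wake(v);
    /* else: the IRQ code below is now responsible for the wakeup */

/* In the (arch-specific) IRQ migration completion path: */
void vcpu_irq_migration_done(struct vcpu *v)
{
    /* This is exactly the "don't forget to re-wake" point above. */
    if ( test_and_clear_bit(_VPF_irq_migrating, &v->pause_flags) )
        vcpu_wake(v);
}

Whether a pause flag is the right mechanism, or whether all of this
should live entirely on the ARM side, I honestly don't know.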

Sorry if I can't be more useful than this for now. :-/

Regards,
Dario
-- 
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
