
Re: [Xen-devel] [RFC PATCH 21/24] ARM: vITS: handle INVALL command



On Tue, 6 Dec 2016, Dario Faggioli wrote:
> On Tue, 2016-12-06 at 11:36 -0800, Stefano Stabellini wrote:
> > On Tue, 6 Dec 2016, Julien Grall wrote:
> > > 
> > > > Another approach is to let the scheduler know that migration is
> > > > slower.
> > > > In fact this is not a new problem: it can be slow to migrate
> > > > interrupts,
> > > > even a few non-LPI interrupts, even on x86. I wonder if the Xen
> > > > scheduler
> > > > has any knowledge of that (CC'ing George and Dario). I guess
> > > > that's the
> > > > reason why most people run with dom0_vcpus_pin.
> > > 
> > > I gave a quick look at x86, arch_move_irqs is not implemented. Only
> > > PIRQ are
> > > migrated when a vCPU is moving to another pCPU.
> > > 
> > > In the case of ARM, we directly modify the configuration of the
> > > hardware. This
> > > adds much more overhead because you have to do a hardware access
> > > for every
> > > single IRQ.
> > 
> > George, Dario, any comments on whether this would make sense and how
> > to
> > do it?
> >
> I was actually looking into this, but I don't think I know enough
> about ARM in general, or about this issue in particular, to be useful.
> 
> That being said, perhaps you could clarify a bit what you mean by
> "let the scheduler know that migration is slower". What would you
> expect the scheduler to do?
> 
> Checking the code, as Julien says, on x86 all we do when we move vCPUs
> around is calling evtchn_move_pirqs(). In fact, it was exactly that
> function that used to be called in multiple places in schedule.c, and
> it was you who (as Julien pointed out already):
> 1) in 5bd62a757b9 ("xen/arm: physical irq follow virtual irq"), 
>    created arch_move_irqs() as something that does something on ARM,
>    and as an empty stub in x86.
> 2) in 14f7e3b8a70 ("xen: introduce sched_move_irqs"), generalized 
>    schedule.c code by implementing sched_move_irqs().
> 
> So, if I understood correctly what Julien said here "I don't think this
> would modify the irq migration work flow. etc.", it looks to me that
> the suggested lazy approach could be a good solution (but I'm saying
> that lacking the knowledge of what it would actually mean to implement
> that).
> 
> If you want something inside the scheduler that sort of delays the
> wakeup of a domain on the new pCPU until some condition in IRQ handling
> code is verified (but, please, confirm whether or not it was this that
> you were thinking of), my thoughts off the top of my head about
> this are:
> - in general, I think it should be possible;
> - it has to be arch-specific, I think?
> - It's easy to avoid the vCPU being woken as a consequence of
>   vcpu_wake() being called, e.g., at the end of vcpu_migrate();
> - we must be careful not to forget/fail to (re)wake up the
>   vCPU when the condition becomes true
> 
> Sorry if I can't be more useful than this for now. :-/

We don't need scheduler support to implement interrupt migration. The
question was much simpler than that: moving a vCPU with interrupts
assigned to it is slower than moving a vCPU without interrupts assigned
to it. You could say that the slowness is directly proportional to the
number of interrupts assigned to the vCPU. Does the scheduler know that,
or does it blindly move vCPUs around? Also see below.



> On Mon, 2016-12-05 at 11:51 -0800, Stefano Stabellini wrote:
> > Another approach is to let the scheduler know that migration is
> > slower.
> > In fact this is not a new problem: it can be slow to migrate
> > interrupts,
> > even a few non-LPI interrupts, even on x86. I wonder if the Xen
> > scheduler
> > has any knowledge of that (CC'ing George and Dario). I guess that's
> > the
> > reason why most people run with dom0_vcpus_pin.
> >
> Oh, and about this last sentence.
> 
> I may indeed be lacking knowledge/understanding, but if you think this
> is a valid use case for dom0_vcpus_pin, I'd indeed be interested in
> knowing why.
> 
> In fact, that configuration has always looked rather awkward to me, and
> I think we should start thinking about no longer providing the option
> at all (or changing/extending its behavior).
> 
> So, if you think you need it, please spell that out, and let's see if
> there are better ways to achieve the same. :-)

That's right: I think dom0_vcpus_pin is a good work-around for the
scheduler's lack of knowledge about interrupts. If the scheduler knew
that moving vCPU0 from pCPU0 to pCPU1 is far more expensive than moving
vCPU3 from pCPU3 to pCPU1, then it would make better decisions and we
wouldn't need dom0_vcpus_pin.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

