
Re: [Xen-devel] [ARM] Native application design and discussion (I hope)



On Wed, 12 Apr 2017, Dario Faggioli wrote:
> On Tue, 2017-04-11 at 13:32 -0700, Stefano Stabellini wrote:
> > On Fri, 7 Apr 2017, Stefano Stabellini wrote:
> > > 
> > > This is the most difficult problem that we need to solve as part
> > > of this work. It is difficult to have the right answer at the
> > > beginning, before seeing any code. If the app_container/app_thread
> > > approach causes too much duplication of work, the alternative
> > > would be to fix/improve stubdoms (minios) until they match what we
> > > need. Specifically, these would be the requirements:
> > > 
> >
> IMO, this stubdom approach is really, really interesting! :-)
> 
> > > 1) Determinism: a stubdom servicing a given guest needs to be
> > >    scheduled immediately after the guest vcpu traps into Xen.
> > >    It needs to be deterministic.
> >
> Something like this has been in my plans for a long time. Being able
> to do it, or having help to make it happen, would be *great*!
> 
> So, if I'm the scheduler, can you tell me exactly when a vcpu blocks
> waiting for a service from the vcpu of an app/stubdom (as opposed to
> going to sleep, waiting for some other, unrelated event, etc.), and
> which one?
> 
> If yes... That'd be a good start.

Yes, I think so.
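
To make that concrete, here is a minimal sketch of the information Xen
could record on the trap path for the scheduler to consume. None of
these names (struct service_wait, block_on_service) exist in Xen today;
they are purely illustrative:

#include <stdbool.h>
#include <stdint.h>

typedef uint16_t domid_t;   /* as in Xen's public headers */

/*
 * Hypothetical per-vcpu record of the service the vcpu is waiting
 * for.  Nothing like this exists in Xen today; the names are made
 * up, purely to illustrate what the scheduler would be told.
 */
struct service_wait {
    bool         waiting;        /* blocked on an emulation request? */
    domid_t      service_domid;  /* stubdom/app that will serve it   */
    unsigned int service_vcpu;   /* which vcpu of that stubdom       */
};

/*
 * Called on the trap path, before the guest vcpu is descheduled, so
 * the scheduler knows exactly which stubdom vcpu to run next on this
 * same pcpu.
 */
static inline void block_on_service(struct service_wait *w,
                                    domid_t stubdom, unsigned int vcpu)
{
    w->waiting = true;
    w->service_domid = stubdom;
    w->service_vcpu = vcpu;
}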


> > >  The stubdom vcpu has to be scheduled on the same pcpu.
> > >    This is probably the most important missing thing at the moment.
> > > 
> That's interesting --similar to what I had in mind, even-- but needs
> thinking.
> 
> E.g., if the stubdom/app is multi-vcpu, which of its vcpus would you
> schedule? And how can we be sure that what will run on that vcpu of
> the stubdom is _exactly_ the process that will deal with the request
> the "real guest" is waiting on?
> 
> TBH, this is much more of an issue if we think of doing something
> like this for driver domains too, while in the stubdom case it indeed
> shouldn't be impossible, but still...
> 
> (And stubdoms, especially minios ones, are the ones I know less, so
> bear with me a bit.)

We would have one app per emulator. Each app would register an MMIO
range or instruction set to emulate. On a guest trap, Xen figures out
which app it needs to run.

With the app model, we would run the app on the same physical cpu where
the guest vcpu trapped, always starting from the same entry point of the
app. You could run as many app instances concurrently, on different
pcpus, as there are guest vcpus.

There are no stubdom processes, only a single entry point and a single
address space.
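
A rough sketch of what that registration and dispatch could look like
is below. All of the names (app_desc, app_register, app_lookup) are
made up for illustration; this is not an existing or proposed Xen
interface, just the shape of the model described above:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t paddr_t;

/* Hypothetical descriptor for one app/emulator instance. */
struct app_desc {
    paddr_t  mmio_start;   /* MMIO range the app emulates              */
    paddr_t  mmio_end;
    void    *entry;        /* single entry point, single address space */
};

#define MAX_APPS 8
static struct app_desc apps[MAX_APPS];
static unsigned int nr_apps;

/* Register an emulator for an MMIO range (illustrative only). */
static bool app_register(paddr_t start, paddr_t end, void *entry)
{
    if ( nr_apps >= MAX_APPS )
        return false;
    apps[nr_apps++] = (struct app_desc){ start, end, entry };
    return true;
}

/*
 * On a guest MMIO trap, find the app to run on this same pcpu,
 * always entering it at its single entry point.
 */
static struct app_desc *app_lookup(paddr_t addr)
{
    unsigned int i;

    for ( i = 0; i < nr_apps; i++ )
        if ( addr >= apps[i].mmio_start && addr < apps[i].mmio_end )
            return &apps[i];

    return NULL;   /* no emulator registered for this address */
}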


> > > 2) Accounting: memory and cpu time of a stubdom should be accounted
> > >    against the domain it is servicing. Otherwise it's not fair.
> > > 
> Absolutely.
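
Just to illustrate what 2) could mean for cpu time, assuming a
credit-style scheduler; the names here are invented, not existing Xen
code:

#include <stdint.h>

/*
 * Hypothetical illustration of requirement 2): the time a stubdom
 * vcpu spends servicing a request is charged to the guest domain it
 * is working for, not to the stubdom itself.
 */
struct sched_account {
    int64_t credit_ns;   /* remaining budget, credit-scheduler style */
};

static void charge_service_time(struct sched_account *guest,
                                int64_t ns_spent)
{
    guest->credit_ns -= ns_spent;   /* the guest pays, not the stubdom */
}
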
> 
> > > 3) Visibility: stub domains and their vcpus should be marked
> > >    differently from other vcpus, so as not to confuse the user.
> > >    Otherwise "xl list" becomes confusing.
> > > 
> Well, this may seem unrelated, but will you schedule the stubdom
> _only_ in this kind of "donated time slots" way?

Yes
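
A tiny sketch of what that could look like, tying in with the
visibility point 3) above; the flag names are hypothetical:

#include <stdbool.h>

/*
 * Hypothetical per-vcpu policy for a servicing stubdom: such a vcpu
 * is never picked from a runqueue on its own, it only runs in time
 * slots donated by the guest vcpu it services, and it is tagged so
 * that tools like "xl list" can display it differently.
 */
struct stub_vcpu_policy {
    bool donated_slots_only;   /* only runs in slots donated by its guest */
    bool is_service_vcpu;      /* marker for tools / visibility           */
};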


> > > 1) and 2) are particularly important. If we had them, we would not
> > > need el0 apps. I believe stubdoms would be as fast as el0 apps too.
> > 
> > CC'ing George and Dario. I was speaking with George about this
> > topic; I'll let him explain his view as scheduler maintainer, but he
> > suggested avoiding scheduler modifications (all schedulers would need
> > to be taught to handle this) and extending struct vcpu for el0 apps
> > instead.
> > 
> Yeah, thanks Stefano. I'm back today after being sick for a couple of
> days, so I need to catch up with this thread, and I will.
> 
> In general, I like the idea of enhancing stubdoms for this, and I'll
> happily participate in design and development of that.

That would be great!
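
For reference, here is a minimal sketch of what the "extend struct vcpu
for el0 apps" direction George suggested might look like; the field
names are hypothetical and no such fields exist in Xen today:

#include <stdbool.h>

/*
 * Hypothetical per-vcpu state describing an el0 app, so that the app
 * runs on the pcpu of, and in time accounted to, the guest vcpu it
 * serves, without teaching every scheduler about a new entity.
 */
struct el0_app_state {
    bool          is_app_vcpu;   /* this vcpu backs an el0 app          */
    struct vcpu  *served_vcpu;   /* guest vcpu whose time/pcpu it uses  */
    void         *app_entry;     /* single fixed entry point of the app */
};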

 

