
Re: [Xen-devel] Notes on stubdoms and latency on ARM



On Fri, 19 May 2017, Volodymyr Babchuk wrote:
> On 18 May 2017 at 22:00, Stefano Stabellini <sstabellini@xxxxxxxxxx> wrote:
> 
> > Description of the problem: need for a place to run emulators and
> > mediators outside of Xen, with low latency.
> >
> > Explanation of what EL0 apps are. What should be their interface with
> > Xen? Could the interface be the regular hypercall interface? In that
> > case, what's the benefit compared to stubdoms?
> I imagined this as a separate syscall interface (with finer-grained
> policy rules). But this can be discussed, of course.

Right, and to be clear, I am not against EL0 apps.
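
Just to make sure we are talking about the same thing, below is the
rough shape of the interface I have in mind. Every name in it is made
up, purely to illustrate; the real ABI would come out of this
discussion:

    /* Hypothetical EL0 app ABI, for illustration only.  Xen enters
     * the app at a fixed entry point with a request code; the app
     * traps back into Xen with an SVC-style call from EL0. */
    enum el0_app_op {
        EL0_APP_OP_INIT,      /* one-time initialization */
        EL0_APP_OP_HANDLE,    /* handle one request, e.g. an MMIO trap */
    };

    /* Calls the app can make back into Xen; a per-app policy would
     * decide which of these it is actually allowed to use. */
    enum el0_app_call {
        EL0_APP_CALL_RETURN,  /* done, resume the guest */
        EL0_APP_CALL_MAP,     /* map a guest page into the app */
        EL0_APP_CALL_LOG,     /* debug output */
    };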


> > The problem with stubdoms is latency and scheduling: it is not
> > deterministic. We could easily improve the null scheduler to introduce
> > some sort of non-preemptive scheduling of stubdoms on the same pcpus
> > as the guest vcpus. It would still require manually pinning vcpus to
> > pcpus.
> I see a couple of other problems with stubdoms. For example, we need a
> mechanism to load a mediator stubdom before dom0.

This can be solved: unrelated to this discussion, I had already created a
project for Outreachy/GSoC to create multiple guests from device tree.

https://wiki.xenproject.org/wiki/Outreach_Program_Projects#Xen_on_ARM:_create_multiple_guests_from_device_tree
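
The binding would have to be defined as part of that work, but the
rough idea is one device tree node per guest to be created at boot,
before dom0. Something like the sketch below, where every node and
property name is invented for the sake of the example:

    mediator {
        compatible = "xen,domain";   /* hypothetical binding */
        memory = <256>;              /* in MB, say */
        cpus = <1>;
        kernel = "mediator";         /* multiboot module with the kernel */
    };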


> > Then, we could add a sched_op hypercall to let the schedulers know that
> > a stubdom is tied to a specific guest domain.
> What if one stubdom serves multiple domains? This is the TEE use case.

It can be done. Stubdoms are typically deployed one per domain, but they
are not limited to that model.
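
The interface could be as simple as the sketch below, where the sub-op
name, number and struct are all invented; a one-to-many variant for the
TEE case would just take a list of target domids instead:

    /* Hypothetical scheduler sub-op declaring that a stubdom serves a
     * given guest, so that the scheduler can co-schedule the pair.
     * Nothing below exists in Xen today. */
    #define SCHEDOP_bind_stubdom  15  /* sub-op number made up */

    struct sched_bind_stubdom {
        domid_t stubdom_id;   /* the stubdom doing emulation/mediation */
        domid_t target_id;    /* the guest it serves */
    };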


> > The other issue with stubdoms is context switch times. Volodymyr showed
> > that minios has much higher context switch times than EL0 apps. This is
> > probably due to the GIC context switch, which is skipped for EL0 apps.
> > Maybe we could skip the GIC context switch for stubdoms too, if we knew
> > that they are not going to use the vGIC. At that point, context switch
> > times should be very similar to those of EL0 apps.
> So you are suggesting creating something like a lightweight stubdom. I
> generally like this idea. But AFAIK, the vGIC is used to deliver events
> from the hypervisor to the stubdom. Do you want to propose another
> mechanism?

There is no way out: if the stubdom needs events, then we'll have to
expose and context switch the vGIC. If it doesn't, then we can skip the
vGIC. However, we would have a similar problem with EL0 apps: I am
assuming that EL0 apps don't need to handle interrupts, but if they do,
then they might need something like a vGIC.
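
Roughly speaking, on the Xen side it would come down to something like
the sketch below. The no_vgic flag is hypothetical, and the two
functions just stand in for what ctxt_switch_from/ctxt_switch_to do on
ARM today:

    /* Sketch: skip saving/restoring GIC state when switching to/from
     * a domain known not to use the vGIC ("no_vgic" is a made-up
     * per-domain flag). */
    static void ctxt_switch_from(struct vcpu *p)
    {
        /* ... save the rest of the state ... */
        if ( !p->domain->arch.no_vgic )
            gic_save_state(p);
    }

    static void ctxt_switch_to(struct vcpu *n)
    {
        if ( !n->domain->arch.no_vgic )
            gic_restore_state(n);
        /* ... restore the rest of the state ... */
    }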


> Also, this sounds much like my EL0 PoC :)

Yes :-)


> > ACTIONS:
> > Improve the null scheduler to enable decent stubdom scheduling on
> > latency-sensitive systems.
> I'm not very familiar with Xen schedulers. It looks like the null
> scheduler is good for hard RT, but isn't a good fit for a generic
> consumer system. What do you think: is it possible to make the credit2
> scheduler schedule stubdoms in the same way?

You can do more than that :-)
You can use credit2 and the null scheduler simultaneously on different
sets of physical cpus using cpupools. For example, you can use the null
scheduler on 2 physical cores and credit2 on the remaining cores.
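
Concretely, it looks like this (pool and domain names are made up, and
the null scheduler needs to be compiled in):

    # move pcpus 2-3 out of the default (credit2) pool into a new
    # pool driven by the null scheduler, then move the domain there
    xl cpupool-cpu-remove Pool-0 2
    xl cpupool-cpu-remove Pool-0 3
    xl cpupool-create name=\"rt-pool\" sched=\"null\"
    xl cpupool-cpu-add rt-pool 2
    xl cpupool-cpu-add rt-pool 3
    xl cpupool-migrate rt-guest rt-pool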

To better answer your question: yes, it can be done with credit2 too;
however, it will obviously be more work (the null scheduler is trivial).


> > Investigate ways to improve context switch times on ARM.
> Do you have any tools to profile or trace the Xen core? Also, I don't
> think that pure context switch time is the biggest issue. Even now, it
> allows 180,000 switches per second (if I'm not wrong). I think
> scheduling latency is more important.

I am using the arch timer, manually reading the counter values. I know
it's not ideal, but it does the job. I am sure that with a combination
of the null scheduler and vcpu pinning, the scheduling latencies can be
greatly reduced.
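
For reference, reading the counter is just a couple of sysreg accesses.
The sketch below is what I do, modulo the exact macros; READ_SYSREG64
and isb() are what Xen uses on arm64, and outside Xen the same can be
done with "mrs %0, cntvct_el0" inline assembly:

    /* Measure a code section with the generic timer's virtual
     * counter; divide the delta by the frequency in CNTFRQ_EL0 to
     * get wall-clock time. */
    uint64_t start, end;

    isb();                              /* keep the reads in place */
    start = READ_SYSREG64(CNTVCT_EL0);

    /* ... code being measured ... */

    isb();
    end = READ_SYSREG64(CNTVCT_EL0);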


