
Re: [RFC PATCH v1 2/6] sched: track time spent in hypervisor tasks



On Fri, 2020-09-25 at 20:21 +0000, Volodymyr Babchuk wrote:
> Hi Dario,
> 
Hi! :-)

> Dario Faggioli writes:
> > And what about the cases where schedule() does return?
> 
> Can it return on x86? I want to test this case, but how do I force
> it? The null scheduler, perhaps?
> 
> > Are these also fine because they're handled within __do_softirq()
> > (i.e., without actually going back to do_softirq() and hence never
> > calling end_hyp_task() for a second time)?
> 
> I'm afraid there will be a bug. schedule() calls end_hyp_task(), and
> if we eventually return from __do_softirq() to do_softirq(),
> end_hyp_task() will be called twice.
>
Yeah, exactly. That's why I was asking whether you had verified that we
actually never get to this, either because we context switch or because
we stay inside __do_softirq() and never go back to do_softirq().

I was, in fact, referring to all the various cases of handling primary
and secondary scheduling requests, when core-scheduling is enabled.
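FWIW, an ASSERT-based guard along these lines is what would catch the
double invocation deterministically (just a sketch; the field names
are made up here, not necessarily what's in your patch):

    void vcpu_begin_hyp_task(struct vcpu *v)
    {
        ASSERT(!v->in_hyp_task);   /* fires if begin() is called twice */
        v->in_hyp_task = true;
        v->hyp_enter_time = NOW();
    }

    void vcpu_end_hyp_task(struct vcpu *v)
    {
        ASSERT(v->in_hyp_task);    /* fires on end() without begin(), or twice */
        v->in_hyp_task = false;
        v->hyp_time += NOW() - v->hyp_enter_time;
    }

With something like this in place, the do_softirq() double-call you
describe would trip the second ASSERT right away, instead of silently
corrupting the accounting.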

> > > I have put a bunch of ASSERTs to ensure that vcpu_begin_hyp_task()
> > > or vcpu_end_hyp_task() are not called twice and that
> > > vcpu_end_hyp_task() is called after vcpu_begin_hyp_task(). Those
> > > asserts are not failing, so I assume that I did all this in the
> > > right way :)
> > > 
> > Yeah, good to know. :-)
> > 
> > Are you doing these tests with both core-scheduling disabled and
> > enabled?
> 
> Good question. On x86 I am running Xen in QEMU. With -smp 2 it sees
> two CPUs:
> 
> (XEN) Brought up 2 CPUs
> (XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
> 
> You are right, I need to try other variants of scheduling
> granularity.
> 
> Do you by any chance know how to emulate more complex setup in QEMU?
>
Like enabling a virtual topology, on top of which you could test core
(or socket) scheduling? If yes, indeed you can do that in QEMU:

https://www.qemu.org/docs/master/qemu-doc.html

-smp [cpus=]n[,cores=cores][,threads=threads][,dies=dies]
     [,sockets=sockets][,maxcpus=maxcpus]

Simulate an SMP system with n CPUs. On the PC target, up to 255 CPUs
are supported. On the Sparc32 target, Linux limits the number of usable
CPUs to 4. For the PC target, the number of cores per die, the number
of threads per core, the number of dies per package and the total
number of sockets can be specified. Missing values will be computed. If
any of these values is given, the total number of CPUs n can be
omitted. maxcpus specifies the maximum number of hotpluggable CPUs.

Once you have an SMT virtual topology, you can boot Xen inside, with a
higher scheduling granularity.

A (rather big!) example would be:

-smp 224,sockets=4,cores=28,threads=2
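Then, inside the guest, you ask Xen for core granularity with the
sched-gran=core boot parameter. A (much smaller) sketch, with dom0
kernel/module loading omitted for brevity:

    qemu-system-x86_64 -smp 8,sockets=1,cores=4,threads=2 -m 4G \
        -kernel xen.gz \
        -append "sched-gran=core console=com1 loglvl=all" \
        -serial stdio

and the boot log should then show something like:

(XEN) Scheduling granularity: core, 2 CPUs per sched-resource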

You can even define a virtual NUMA topology, if you want.

And you can pin the vCPUs to the physical CPUs of the host, in such a
way that the virtual topology is mapped onto the physical one. This is
good for performance, but it also increases the accuracy of the testing
a little bit.
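With plain QEMU, one way to do that (again, just a sketch; pick host
CPU numbers that match your actual topology) is to grab the vCPU
thread IDs from the monitor and pin them with taskset:

    (qemu) info cpus        # lists each vCPU with its host thread_id
    $ taskset -pc 0 <thread-id-of-vcpu0>
    $ taskset -pc 1 <thread-id-of-vcpu1>

so that, e.g., the two virtual threads of a virtual core end up on two
SMT siblings of the same physical core.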

> Also, what is the preferred way to test/debug Xen on x86?
> 
I test on real hardware, at least most of the time, if this is what
you're asking.

Checking if the code is "functionally correct" is ok-ish if done in a
VM first. But then, especially for scheduling-related things, where
timing plays a rather significant role, I personally prefer to test on
actual hardware sooner rather than later.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)
