
Re: [Xen-devel] [PATCH RFC 17/49] xen/sched: move some per-vcpu items to struct sched_item



>>> On 01.04.19 at 07:59, <jgross@xxxxxxxx> wrote:
> On 30/03/2019 10:59, Juergen Gross wrote:
>> On 29/03/2019 22:33, Andrew Cooper wrote:
>>> If at all possible, I'd prefer to see the bits which actually need
>>> external use disentangled and put into sched.h, making sched-if.h
>>> properly private to the schedulers.  I actually started a cleanup
>>> series which moved all of the scheduler infrastructure into
>>> common/sched/, but found a disappointing amount of sched-if.h being
>>> referenced externally.
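>>>
>>> (Roughly, and purely as an example of the direction - not the actual
>>> split:
>>>
>>>     /* xen/include/xen/sched.h: only what external callers need */
>>>     struct scheduler;            /* opaque outside common/sched/ */
>>>
>>>     /* sched-if.h, private to the schedulers: the full definition */
>>>     struct scheduler {
>>>         ...
>>>     };
>>>
>>> with external users of the internals converted to go through proper
>>> sched.h-level interfaces.)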
>> 
>> I can add something like that to my series if you want. So:
>> 
>> - move schedule.c, sched_*.c and cpupool.c to common/sched/
>> - move stuff from sched-if.h to sched.h if it is needed outside of
>>   common/sched/
>> - move sched-if.h to common/sched/ (resulting layout sketched below)
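>> 
>> As a rough sketch of the resulting layout (file names are illustrative
>> only and would be settled during review):
>> 
>>     xen/common/sched/
>>         Makefile
>>         schedule.c      (from xen/common/schedule.c)
>>         cpupool.c       (from xen/common/cpupool.c)
>>         sched_credit.c  (from xen/common/sched_credit.c)
>>         sched_credit2.c (from xen/common/sched_credit2.c)
>>         sched_rt.c      (from xen/common/sched_rt.c)
>>         sched_null.c    (from xen/common/sched_null.c)
>>         sched-if.h      (from xen/include/xen/sched-if.h)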
> 
> A question especially to the scheduler maintainers and "the REST": should
> we move the scheduler stuff to xen/common/sched/, or would xen/sched/ be
> more appropriate?
> 
> Maybe it would be worthwhile to move e.g. the context switching from
> xen/arch/*/domain.c to xen/sched/context_<arch>.c? I think this code is
> rather scheduler-related, and moving it to the sched directory might
> help hide some scheduler internals from other sources, especially with
> my core scheduling series. IMO this would make the xen/sched/ directory
> the preferred one.
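> 
> To make that concrete, a minimal sketch of what I have in mind (the file
> name and the two helpers are made up for illustration; only
> context_switch() and struct vcpu exist today):
> 
>     /* xen/sched/context_x86.c -- hypothetical */
>     void context_switch(struct vcpu *prev, struct vcpu *next)
>     {
>         /* Scheduler-private bookkeeping stays inside xen/sched/,
>            invisible to the rest of the tree. */
>         sched_context_switched(prev, next);   /* made-up helper */
> 
>         /* Only the low-level state switch stays with the arch. */
>         arch_context_switch(prev, next);      /* made-up helper */
>     }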

FWIW, I don't really mind such a move as long as it doesn't result in
having to expose various arch-internals just to make them usable from
xen/sched/context_<arch>.c (or whatever it's going to be named - the
name is a little long for my taste).
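
Concretely, the kind of exposure I'd like to avoid (the helper name here
is invented for the example): a function that today can stay a static
inside xen/arch/x86/domain.c would suddenly need a public declaration,
e.g.

    /* asm-x86/domain.h -- hypothetical addition */
    void arch_flush_lazy_state(struct vcpu *v);

just so xen/sched/context_x86.c can call it.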

But may I recommend not doing too many things all in one go?

Jan


