Re: [Xen-devel] [PATCH v2 2/3] xen: add hypercall option to temporarily pin a vcpu
On 01/03/16 15:52, George Dunlap wrote:
> On 01/03/16 09:02, Juergen Gross wrote:
>> Some hardware (e.g. Dell studio 1555 laptops) require SMIs to be
>> called on physical cpu 0 only. Linux drivers like dcdbas or i8k try
>> to achieve this by pinning the running thread to cpu 0, but in Dom0
>> this is not enough: the vcpu must be pinned to physical cpu 0 via
>> Xen, too.
>>
>> Add a stable hypercall option SCHEDOP_pin_temp to the sched_op
>> hypercall to achieve this. It is taking a physical cpu number as
>> parameter. If pinning is possible (the calling domain has the
>> privilege to make the call and the cpu is available in the domain's
>> cpupool) the calling vcpu is pinned to the specified cpu. The old
>> cpu affinity is saved. To undo the temporary pinning a cpu -1 is
>> specified. This will restore the original cpu affinity for the vcpu.
>>
>> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
>> ---
>> V2: - limit operation to hardware domain as suggested by Jan Beulich
>>     - some style issues corrected as requested by Jan Beulich
>>     - use fixed width types in interface as requested by Jan Beulich
>>     - add compat layer checking as requested by Jan Beulich
>> ---
>>  xen/common/compat/schedule.c |  4 ++
>>  xen/common/schedule.c        | 92 +++++++++++++++++++++++++++++++++++++++++---
>>  xen/include/public/sched.h   | 17 ++++++++
>>  xen/include/xlat.lst         |  1 +
>>  4 files changed, 109 insertions(+), 5 deletions(-)
>>
>> diff --git a/xen/common/compat/schedule.c b/xen/common/compat/schedule.c
>> index 812c550..73b0f01 100644
>> --- a/xen/common/compat/schedule.c
>> +++ b/xen/common/compat/schedule.c
>> @@ -10,6 +10,10 @@
>>
>>  #define do_sched_op compat_sched_op
>>
>> +#define xen_sched_pin_temp sched_pin_temp
>> +CHECK_sched_pin_temp;
>> +#undef xen_sched_pin_temp
>> +
>>  #define xen_sched_shutdown sched_shutdown
>>  CHECK_sched_shutdown;
>>  #undef xen_sched_shutdown
>> diff --git a/xen/common/schedule.c b/xen/common/schedule.c
>> index b0d4b18..653f852 100644
>> --- a/xen/common/schedule.c
>> +++ b/xen/common/schedule.c
>> @@ -271,6 +271,12 @@ int sched_move_domain(struct domain *d, struct cpupool *c)
>>      struct scheduler *old_ops;
>>      void *old_domdata;
>>
>> +    for_each_vcpu ( d, v )
>> +    {
>> +        if ( v->affinity_broken )
>> +            return -EBUSY;
>> +    }
>> +
>>      domdata = SCHED_OP(c->sched, alloc_domdata, d);
>>      if ( domdata == NULL )
>>          return -ENOMEM;
>> @@ -669,6 +675,14 @@ int cpu_disable_scheduler(unsigned int cpu)
>>              if ( cpumask_empty(&online_affinity) &&
>>                   cpumask_test_cpu(cpu, v->cpu_hard_affinity) )
>>              {
>> +                if ( v->affinity_broken )
>> +                {
>> +                    /* The vcpu is temporarily pinned, can't move it. */
>> +                    vcpu_schedule_unlock_irqrestore(lock, flags, v);
>> +                    ret = -EBUSY;
>> +                    break;
>> +                }
>
> Does this mean that if the user closes the laptop lid while one of these
> drivers has vcpu0 pinned, that Xen will crash (see
> xen/arch/x86/smpboot.c:__cpu_disable())? Or is it the OS's job to make
> sure that all temporary pins are removed before suspending?
>
> Also -- have you actually tested the "cpupool move while pinned"
> functionality to make sure it actually works? There's a weird bit in
> cpupool_unassign_cpu_helper() where after calling
> cpu_disable_scheduler(cpu), it unconditionally sets the cpu bit in the
> cpupool_free_cpus mask, even if it returns an error. That can't be
> right, even for the existing -EAGAIN case, can it?
>
> I see that you have a loop to retry this call several times in the next
> patch; but what if it fails every time -- what state is the system in?
>
> And, in general, what happens if the device driver gets mixed up and
> forgets to unpin the vcpu? Is the only recourse to reboot your host (or
> deal with the fact that you can't reconfigure your cpupools)?

(I should say, I think this probably is the best solution to this
problem; I just want to make sure we think about the error cases
carefully.)
 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel