Re: [Xen-devel] [PATCH v14 01/10] x86: add generic resource (e.g. MSR) access hypercall
>>> On 02.09.14 at 12:04, <chao.p.peng@xxxxxxxxxxxxxxx> wrote:
> On Tue, Sep 02, 2014 at 09:52:21AM +0100, Jan Beulich wrote:
>> >>> On 02.09.14 at 10:33, <chao.p.peng@xxxxxxxxxxxxxxx> wrote:
>> > On Fri, Aug 29, 2014 at 04:40:52PM +0100, Jan Beulich wrote:
>> >> >>> On 28.08.14 at 09:43, <chao.p.peng@xxxxxxxxxxxxxxx> wrote:
>> >> > +    case XENPF_resource_op:
>> >> > +    {
>> >> > +        struct xen_resource_access ra;
>> >> > +        struct xenpf_resource_op *rsc_op = &op->u.resource_op;
>> >> > +        unsigned int i, j = 0, cpu = smp_processor_id();
>> >> > +
>> >> > +        for ( i = 0; i < rsc_op->nr; i++ )
>> >> > +        {
>> >> > +            if ( copy_from_guest_offset(&ra.data, rsc_op->data, i, 1) )
>> >> > +            {
>> >> > +                ret = -EFAULT;
>> >> > +                break;
>> >> > +            }
>> >> > +
>> >> > +            if ( ra.data.cpu == cpu )
>> >> > +                resource_access_one(&ra);
>> >> > +            else if ( cpu_online(ra.data.cpu) )
>> >> > +                on_selected_cpus(cpumask_of(ra.data.cpu),
>> >> > +                                 resource_access_one, &ra, 1);
>> >> > +            else
>> >> > +            {
>> >> > +                ret = -ENODEV;
>> >> > +                break;
>> >> > +            }
>> >> > +
>> >> > +            if ( ra.ret )
>> >> > +            {
>> >> > +                ret = ra.ret;
>> >> > +                break;
>> >> > +            }
>> >> > +
>> >> > +            if ( copy_to_guest_offset(rsc_op->data, i, &ra.data, 1) )
>> >> > +            {
>> >> > +                ret = -EFAULT;
>> >> > +                break;
>> >> > +            }
>> >> > +
>> >> > +            /* Find the start point that requires no preemption */
>> >> > +            if ( ra.data.flag && j == 0 )
>> >> > +                j = i;
>> >> > +            /* Set j = 0 when walking out of the non-preemption area */
>> >> > +            if ( ra.data.flag == 0 )
>> >> > +                j = 0;
>> >> > +            if ( hypercall_preempt_check() )
>> >> > +            {
>> >> > +                ret = hypercall_create_continuation(
>> >> > +                    __HYPERVISOR_platform_op, "ih",
>> >> > +                    ra.data.flag ? j : i, u_xenpf_op);
>> >>
>> >> Which means everything starting from j will be re-executed
>> >> another time when continuing.
>> >> That creates three problems: you
>> >> can't guarantee forward progress, you may do something
>> >> with side effects more than once, and you break the operation
>> >> in a place that was requested not to be preemptible.
>> >
>> > I see the problem here. Actually, the j or i here will not be passed to
>> > the next iteration successfully. Possibly a 'count' parameter needs to
>> > be added to do_platform_op() for this purpose.
>>
>> You can't add any parameter to do_platform_op(), and I also
>> don't see why you'd need to.
>>
> Let me shed more light on this.
>
> The 'i' we want to pass to the next iteration is saved in
> guest_cpu_user_regs in hypercall_create_continuation(), which will then
> be used as a parameter for the re-execution of do_platform_op(). Take
> do_hvm_op() as an example:
>
> 6199         rc = hypercall_create_continuation(__HYPERVISOR_hvm_op, "lh",
> 6120                                            op | start_iter, arg);
>
> It reuses several bits of the existing param 'op' to pass start_iter,
> and start_iter is then recovered in the second call of do_hvm_op():
>
> 5478     unsigned long start_iter = op & ~HVMOP_op_mask;
>
> For our case we only save 'i' into guest_cpu_user_regs, but we lack a
> way to accept it.
>
> However, the method used in do_hvm_op() does not work for us, as
> do_platform_op() only has one pointer-type param, in which we can't
> safely reuse any bits.

Correct, which means you can't just copy that mechanism. There are two
possible solutions I see here:

1) Make an exception and store the continuation information in the
interface structure (but say so clearly in the public header - the lack
of such information is why we can't do this elsewhere).

2) Fiddle with the multicall mechanism to allow indicating the desire
to skip preemption checking for the current iteration, and do the
batching desired here via the multicall layer.

Of course there are more heavyweight solutions, like introducing a
brand new hypercall. Out of the two options above I'd personally
prefer the second.
Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel