
Re: [Xen-devel] [PATCH v15 11/21] x86/VPMU: Interface for setting PMU mode and flags



>>> On 17.11.14 at 00:07, <boris.ostrovsky@xxxxxxxxxx> wrote:
> @@ -278,3 +290,139 @@ void vpmu_dump(struct vcpu *v)
>          vpmu->arch_vpmu_ops->arch_vpmu_dump(v);
>  }
>  
> +static s_time_t vpmu_start_ctxt_switch;

Blank line here please.

> +static long vpmu_sched_checkin(void *arg)
> +{
> +    int cpu = cpumask_next(smp_processor_id(), &cpu_online_map);

unsigned int.

> +    int ret;
> +
> +    /* Mode change shouldn't take more than a few (say, 5) seconds. */
> +    if ( NOW() > vpmu_start_ctxt_switch + SECONDS(5) )
> +    {
> +        ret = -EBUSY;
> +        goto fail;
> +    }

So what does this guard against? Holding xenpmu_mode_lock for
5 seconds is not a problem, and I don't think you actually need a
lock here at all: a global, atomically updated flag ought to be
sufficient (afaict the way you use the lock amounts to exactly that
anyway).

> +
> +    if ( cpu < nr_cpu_ids )
> +    {
> +        ret = continue_hypercall_on_cpu(cpu, vpmu_sched_checkin, arg);
> +        if ( ret )
> +            goto fail;
> +    }
> +    else
> +        vpmu_start_ctxt_switch = 0;
> +
> +    return 0;
> +
> + fail:
> +    vpmu_mode = (uint64_t)arg;

Even if we don't support x86-32 anymore, please avoid giving bad
examples: Casts between pointers and integers should always use
(unsigned) long on the integer side.

> +    vpmu_start_ctxt_switch = 0;
> +    return ret;
> +}
> +
> +static int vpmu_force_context_switch(uint64_t old_mode)

Same comment as above regarding the type.

> +{
> +    int ret;
> +
> +    vpmu_start_ctxt_switch = NOW();
> +
> +    ret = continue_hypercall_on_cpu(cpumask_first(&cpu_online_map),
> +                                    vpmu_sched_checkin, (void *)old_mode);
> +    if ( ret )
> +        vpmu_start_ctxt_switch = 0;
> +
> +    return ret;
> +}

I don't think it is guaranteed (independent of implementation details
of continue_hypercall_on_cpu()) that this way you go through the
context switch path once on each CPU when
cpumask_first(&cpu_online_map) == smp_processor_id() here. In
particular, there looks to be a problem calling vcpu_sleep_sync()
in the tasklet handler when v == current, which would be the case
if you started on the "correct" CPU and the tasklet softirq got
handled before the scheduler one. I think you instead want to use
cpumask_cycle() here and above, and finish the whole operation
once you're back on the CPU you started on (the single-pCPU case
may need some extra consideration).

I realize that you simply follow what microcode_update() does, but
I think we should rather correct that one than copy the latent issue
it has elsewhere.

> +long do_xenpmu_op(int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
> +{
> +    int ret;
> +    struct xen_pmu_params pmu_params;
> +
> +    ret = xsm_pmu_op(XSM_OTHER, current->domain, op);
> +    if ( ret )
> +        return ret;
> +
> +    switch ( op )
> +    {
> +    case XENPMU_mode_set:
> +    {
> +        uint64_t old_mode;

Irrespective of the earlier comments I don't see why this isn't just
unsigned int (or typeof(vpmu_mode)).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
