
Re: [Xen-devel] [PATCH v3 13/16] xen/arm: Add support for GIC v3



On Mon, 2014-05-05 at 17:38 +0530, Vijay Kilari wrote:
> >> +/* Wait for completion of a distributor change */
> >> +static void gicv3_do_wait_for_rwp(void __iomem *base)
> >> +{
> >> +    u32 val;
> >> +    u32 count = 1000000;
> >> +
> >> +    do {
> >> +        val = readl_relaxed(base + GICD_CTLR);
> >> +        if ( !(val & GICD_CTLR_RWP) )
> >> +           break;
> >> +        cpu_relax();
> >> +        udelay(1);
> >
> > Ick. Is there no event when rwp changes, so we could do wfe here?
> >
> > Could we at least use NOW() and MILLISECS() to construct a delay of a
> > known length?
> 
>    Do you mean to use a timer handler? I thought this was simple enough

No, just:

        deadline = NOW() + MILLISECS(somenumber);
        do {
                val = ...;
                if ( ... /* RWP clear */ )
                        break;
                if ( NOW() > deadline )
                        /* Timeout! */
                cpu_relax(); /* udelay etc. */
        } while ( ... );
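For concreteness, here is one way the suggested loop could look in full.
This is only a sketch: it reuses the names from the quoted patch, NOW()
and MILLISECS() are Xen's nanosecond-resolution (s_time_t) time helpers,
and the 1 second budget and the warn-rather-than-panic choice are
placeholders for whatever this discussion settles on:

    /* Wait for completion of a distributor change, with a bounded wait. */
    static void gicv3_do_wait_for_rwp(void __iomem *base)
    {
        s_time_t deadline = NOW() + MILLISECS(1000); /* arbitrary budget */
        u32 val;

        for ( ;; )
        {
            val = readl_relaxed(base + GICD_CTLR);

            /* RWP clear means the register write has taken effect. */
            if ( !(val & GICD_CTLR_RWP) )
                return;

            if ( NOW() > deadline )
            {
                dprintk(XENLOG_WARNING, "RWP timeout\n");
                return; /* or panic(), per the discussion above */
            }

            cpu_relax();
            udelay(1);
        }
    }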


> [...]
> >
> >> +    } while ( count-- );
> >> +
> >> +    if ( !count )
> >> +        dprintk(XENLOG_WARNING, "RWP timeout\n");
> >
> > Shouldn't we panic here?
> >
> > And if we are going to panic, we might as well wait forever? (Perhaps
> > with a warning after some number of iterations.)
> >
>     There is already a warning after 1 sec. I think a warning is enough
> here; this is not such a critical scenario.

RWP timeout isn't critical? How does it get recovered?

> 
> [...]
> 
> >> +static u16 gicv3_compute_target_list(int *base_cpu, const struct cpumask *mask,
> >> +                                     u64 cluster_id)
> >> +{
> >> +    int cpu = *base_cpu;
> >> +    u64 mpidr = cpu_logical_map(cpu);
> >> +    u16 tlist = 0;
> >> +
> >> +    while ( cpu < nr_cpu_ids )
> >> +    {
> >> +        /*
> >> +         * If we ever get a cluster of more than 16 CPUs, just
> >> +         * scream and skip that CPU.
> >> +         */
> >> +        if ( (mpidr & 0xff) >= 16 )
> >> +        {
> >> +            dprintk(XENLOG_WARNING, "Cluster with more than 16 CPUs\n");
> >> +            goto out;
> >> +        }
> >> +        tlist |= 1 << (mpidr & 0xf);
> >> +
> >> +        cpu = cpumask_next(cpu, mask);
> >
> > Aren't you essentially opencoding for_each_cpu with this check and the
> > while loop?
> >
> > The caller of this function already has a for_each_cpu in it. Can you
> > explain the algorithm please.
> 
>    Though the caller of this function calls for_each_cpu(),
> gicv3_compute_target_list() always leaves *base_cpu on the last cpu of
> the current cluster, so the next for_each_cpu() step moves to the first
> cpu of the next cluster. Essentially this for_each_cpu() will loop once
> per cluster.
> 
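To make that pattern concrete, here is a sketch of the sort of caller
being described. The function name and the final SGI write are
illustrative only, not taken from the patch:

    /* for_each_cpu() advances cpu, but gicv3_compute_target_list()
     * leaves *base_cpu on the last cpu it consumed in the current
     * cluster, so each iteration of this loop handles one whole
     * cluster. gicv3_send_sgi_list() is a hypothetical name for the
     * ICC_SGI1R_EL1 write built from cluster_id and tlist.
     */
    static void gicv3_send_sgi_mask(const cpumask_t *mask, unsigned int sgi)
    {
        int cpu;

        for_each_cpu ( cpu, mask )
        {
            u64 cluster_id = cpu_logical_map(cpu) & ~0xffUL;
            u16 tlist = gicv3_compute_target_list(&cpu, mask, cluster_id);

            gicv3_send_sgi_list(sgi, cluster_id, tlist); /* hypothetical */
        }
    }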
> >
> >> +        if ( cpu == nr_cpu_ids )
> >> +        {
> >> +            cpu--;
> >> +            goto out;
> >> +        }
> >> +
> >> +        mpidr = cpu_logical_map(cpu);
> >> +        if ( cluster_id != (mpidr & ~0xffUL) ) {
> >> +            cpu--;
> >> +            goto out;
> >> +        }
> >> +    }
> >> +out:
> >> +    *base_cpu = cpu;
> >> +    return tlist;
> >> +}
> >> +
> >> +    dsb();
> >
> > The dsb() macro has taken a scope parameter for a while now. Which version
> > of Xen is all this based on?
> 
>   This is based on Stefano's git tree, where your scope parameter patch does not exist.
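(For reference, with the scope parameter the barrier sites would look
something like the lines below; which scope is right depends on what the
barrier must order, so treat these as illustrative only:

    dsb(sy);  /* full-system barrier, the conservative choice */
    dsb(ish); /* inner-shareable, often enough for SMP-only ordering */
)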

Stefano's git has dozens of branches in it. This series will obviously
need to be based on mainline before it can be applied; I suppose that
means you are waiting for some series of his to be applied?

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

