
Re: [Xen-devel] [PATCH] dpci: Replace tasklet with an softirq (v4)



>>> On 10.09.14 at 22:26, <konrad@xxxxxxxxxx> wrote:
> The patch is simple - instead of scheduling a tasklet
> we schedule our own softirq - HVM_DPCI_SOFTIRQ, which will
> take care of running 'hvm_dirq_assist'. The information we need
> on each CPU is which 'struct domain' structure 'hvm_dirq_assist'
> needs to run on. That is simply solved by threading the
> 'struct domain' through a linked list. The rule of running only
> one 'hvm_dirq_assist' per 'struct domain' is also preserved
> by having 'schedule_dpci_for' ignore any subsequent calls for
> a domain which has already been scheduled.

Suggested-by: ...

> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
> ---
> v2: On top of ref-cnts also have wait loop for the outstanding
>     'struct domain' that need to be processed.
> v3: Add -ERETRY, fix up StyleGuide issues
> v4: Clean it up more, redo per_cpu, this_cpu logic
> ---
>  xen/arch/x86/domain.c         |    4 +-
>  xen/drivers/passthrough/io.c  |  168 ++++++++++++++++++++++++++++++++++++++--
>  xen/drivers/passthrough/pci.c |   24 +++++--
>  xen/include/xen/hvm/irq.h     |    2 +-
>  xen/include/xen/pci.h         |    2 +-
>  xen/include/xen/sched.h       |    6 ++
>  xen/include/xen/softirq.h     |    3 +

It also looks like you failed to Cc a couple of people, even if the
changes outside of xen/drivers/passthrough/ are small.

> --- a/xen/drivers/passthrough/io.c
> +++ b/xen/drivers/passthrough/io.c
> @@ -20,14 +20,123 @@
>  
>  #include <xen/event.h>
>  #include <xen/iommu.h>
> +#include <xen/cpu.h>
>  #include <xen/irq.h>
>  #include <asm/hvm/irq.h>
>  #include <asm/hvm/iommu.h>
>  #include <asm/hvm/support.h>
>  #include <xen/hvm/irq.h>
> -#include <xen/tasklet.h>
>  
> -static void hvm_dirq_assist(unsigned long _d);
> +static void hvm_dirq_assist(struct domain *d);

I think by not placing some (most?) of your new code at the top of
the file, you could avoid the need for this declaration altogether.

> +static void schedule_dpci_for(struct domain *d)
> +{
> +    if ( !test_and_set_bit(STATE_SCHED, &d->state) )
> +    {
> +        unsigned long flags;
> +        struct list_head *list;
> +        int cpu = smp_processor_id();
> +
> +        INIT_LIST_HEAD(&d->list);

Why does this need doing on each schedule operation?
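
(For reference, list_add_tail() bottoms out in __list_add(), which
overwrites both pointers of the entry being inserted - quoting
xen/include/xen/list.h from memory:

    static inline void __list_add(struct list_head *new,
                                  struct list_head *prev,
                                  struct list_head *next)
    {
        next->prev = new;
        new->next = next;
        new->prev = prev;
        prev->next = new;
    }

so the node needs no initialization right before being queued; a
one-time INIT_LIST_HEAD() wherever the field first gets set up would
seem sufficient.)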

> +
> +        get_knownalive_domain(d); /* To be put by 'dpci_softirq' */
> +
> +        local_irq_save(flags);
> +        list = &per_cpu(dpci_list, cpu);
> +        list_add_tail(&d->list, list);
> +        local_irq_restore(flags);

I doubt that the avoidance of this_cpu() here really buys us much. In
fact I think the "list" local variable is quite pointless (hampering
readability) too.
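
Something along these lines (untested sketch only) would seem to read
better:

    static void schedule_dpci_for(struct domain *d)
    {
        if ( !test_and_set_bit(STATE_SCHED, &d->state) )
        {
            unsigned long flags;

            get_knownalive_domain(d); /* To be put by 'dpci_softirq'. */

            local_irq_save(flags);
            list_add_tail(&d->list, &this_cpu(dpci_list));
            local_irq_restore(flags);

            raise_softirq(HVM_DPCI_SOFTIRQ);
        }
    }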

Furthermore, with this much restructuring I wonder whether the
domain centric model here really is the right one: All hvm_dirq_assist()
does is iterate over all IRQs, yet the raising side already knows which
struct hvm_pirq_dpci instance is of interest. (It was quite odd for the
old code to have a per-IRQ tasklet, but have the tasklet action then
iterate over all IRQs for the domain.) Such a switch would seem to
have the potential of reducing the complexity in dpci_softirq().
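
To illustrate - the field names ("softirq_list", a "dom" back pointer
in struct hvm_pirq_dpci) and the changed hvm_dirq_assist() signature
are made up here, and the STATE_RUN/re-queue handling is glossed over
- the softirq handler could then shrink to roughly:

    static void dpci_softirq(void)
    {
        LIST_HEAD(our_list);

        local_irq_disable();
        list_splice_init(&this_cpu(dpci_list), &our_list);
        local_irq_enable();

        while ( !list_empty(&our_list) )
        {
            struct hvm_pirq_dpci *pirq_dpci =
                list_entry(our_list.next, struct hvm_pirq_dpci,
                           softirq_list);

            list_del(&pirq_dpci->softirq_list);

            if ( !test_and_clear_bit(STATE_SCHED, &pirq_dpci->state) )
                BUG();
            /* Assumes hvm_dirq_assist() now takes the individual entry. */
            hvm_dirq_assist(pirq_dpci->dom, pirq_dpci);
            put_domain(pirq_dpci->dom);
        }
    }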

(As a side note, _hvm_dirq_assist() seems poorly coded too: The
loop there does a number of loop-invariant things. I'll try to
remember to see whether this can be done in a less convoluted
way.)

> +
> +        raise_softirq(HVM_DPCI_SOFTIRQ);
> +    }
> +}
> +
> +int dpci_kill(struct domain *d)
> +{
> +    if ( test_and_set_bit(STATE_SCHED, &d->state) )
> +        return -EAGAIN;
> +
> +    if ( test_bit(STATE_RUN, &d->state) )
> +        return -EAGAIN;
> +
> +    clear_bit(STATE_SCHED, &d->state);
> +
> +    return 0;
> +}
> +
> +static void dpci_softirq(void)
> +{
> +    struct domain *d;
> +    int cpu = smp_processor_id();
> +    struct list_head *list;

Same here - pretty pointless local variable, to some degree hiding what
the actual operation is. "cpu" otoh, being used more than once here, is
certainly useful to retain. But please make all of those variables
throughout the patch "unsigned int".

> +    struct list_head our_list;
> +
> +    INIT_LIST_HEAD(&our_list);

Please use LIST_HEAD() instead.
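
I.e. the function could start out as (just the first few lines shown):

    static void dpci_softirq(void)
    {
        unsigned int cpu = smp_processor_id();
        LIST_HEAD(our_list);

        local_irq_disable();
        list_splice_init(&per_cpu(dpci_list, cpu), &our_list);
        local_irq_enable();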

> +
> +    local_irq_disable();
> +    list = &per_cpu(dpci_list, cpu);
> +    list_splice_init(list, &our_list);
> +    local_irq_enable();
> +
> +    while ( !list_empty(&our_list) )
> +    {
> +        d = list_entry(our_list.next, struct domain, list);
> +        list_del(&d->list);
> +
> +        if ( !test_and_set_bit(STATE_RUN, &d->state) )
> +        {
> +            if ( !test_and_clear_bit(STATE_SCHED, &d->state) )
> +                BUG();
> +            hvm_dirq_assist(d);
> +            clear_bit(STATE_RUN, &d->state);
> +            put_domain(d);
> +            continue;
> +        }
> +
> +        INIT_LIST_HEAD(&d->list);

And again - why?

> -void pci_release_devices(struct domain *d)
> +int pci_release_devices(struct domain *d)
>  {
>      struct pci_dev *pdev;
>      u8 bus, devfn;
> +    int ret;
>  
>      spin_lock(&pcidevs_lock);
> -    pci_clean_dpci_irqs(d);
> +    ret = pci_clean_dpci_irqs(d);
> +    if ( ret && ret == -EAGAIN )
> +        return ret;

Just "if ( ret == -EAGAIN )" would have the same effect afaict.

> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -456,6 +456,12 @@ struct domain
>      /* vNUMA topology accesses are protected by rwlock. */
>      rwlock_t vnuma_rwlock;
>      struct vnuma_info *vnuma;
> +
> +#ifdef HAS_PASSTHROUGH
> +    /* For HVM_DPCI_SOFTIRQ. We use refcnt when scheduled to run. */
> +    struct list_head list;
> +    unsigned long state;
> +#endif

These either need a name prefix or putting into a suitably named
container struct (I'd personally prefer the latter). All of course only
if the domain centric model needs to be retained for some reason.
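
E.g. (the "dpci" name below is just a placeholder):

    #ifdef HAS_PASSTHROUGH
        /* For HVM_DPCI_SOFTIRQ. We use refcnt when scheduled to run. */
        struct {
            struct list_head list;
            unsigned long state;
        } dpci;
    #endif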

Jan
