
[Xen-devel] RE: [rfc 1/2] pt_irq_time_out() should act on all machine_irq



Simon Horman wrote:
> In pt_irq_time_out() the following code loops through all used
> guest_gsi:
> 
>     list_for_each_entry ( digl, &irq_map->digl_list, list )
>     {
>         guest_gsi = digl->gsi;
>         machine_gsi = dpci->girq[guest_gsi].machine_gsi;
>       ...
>     }
> 
> And a little later on machine_gsi is used.
> That is, only the last machine_gsi found is used,
> rather than all of the machine_gsi values that are found.
> 
> This seems to be incorrect to me,
> but I am unsure of how to test this.
>

The timer is set per machine GSI, so all of the machine_gsi values found in the 
loop are the same. More than one device may share a machine GSI, but the 
assumption was that they would not share a guest GSI: digl_list contains all of 
the guest GSIs that correspond to one machine GSI.
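
To make the relationship concrete, here is a simplified sketch of the structures 
involved. The field names follow the quoted code, but the definitions themselves 
are illustrative stand-ins, not the actual Xen ones:

struct list_head { struct list_head *next, *prev; }; /* stand-in for Xen's list type */

/* One entry per passed-through (device, INTx) pair; illustrative layout only. */
struct dev_intx_gsi_link {
    struct list_head list;    /* linked into the machine GSI's digl_list */
    unsigned char device;     /* guest PCI device number */
    unsigned char intx;       /* guest INTx pin */
    unsigned char gsi;        /* guest GSI raised for this device/pin */
};

/* Guest GSI back to machine GSI, as used by dpci->girq[guest_gsi].machine_gsi. */
struct girq_mapping {
    unsigned int machine_gsi;
};

/*
 * digl_list hangs off one machine GSI, and each entry's guest GSI maps back to
 * that same machine GSI through girq[].machine_gsi.  That is why every
 * iteration of the original loop yields the same machine_gsi value, as long as
 * guest GSIs are not shared between machine GSIs.
 */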

Now that you want passed-through devices to share a guest GSI, that code 
obviously needs to change. Your change below looks fine to me.

Regards,
Weidong

 
> This code appears to have been introduced in
> "vt-d: Support intra-domain shared interrupt" by Weidong Han.
> 
> Cc: Weidong Han <weidong.han@xxxxxxxxx>
> Cc: Yuji Shimada <shimada-yxb@xxxxxxxxxxxxxxx>
> Signed-off-by: Simon Horman <horms@xxxxxxxxxxxx>
> 
> Index: xen-unstable.hg/xen/drivers/passthrough/io.c
> ===================================================================
> --- xen-unstable.hg.orig/xen/drivers/passthrough/io.c  2009-03-09 12:44:48.000000000 +1100
> +++ xen-unstable.hg/xen/drivers/passthrough/io.c        2009-03-09 12:58:28.000000000 +1100
> @@ -37,6 +37,9 @@ static void pt_irq_time_out(void *data)
>      struct hvm_irq_dpci *dpci = NULL;
>      struct dev_intx_gsi_link *digl;
>      uint32_t device, intx;
> +    DECLARE_BITMAP(machine_gsi_map, NR_IRQS);
> +
> +    bitmap_zero(machine_gsi_map, NR_IRQS);
> 
>      spin_lock(&irq_map->dom->event_lock);
> 
> @@ -46,16 +49,31 @@ static void pt_irq_time_out(void *data)
>      {
>          guest_gsi = digl->gsi;
>          machine_gsi = dpci->girq[guest_gsi].machine_gsi;
> +        set_bit(machine_gsi, machine_gsi_map);
>          device = digl->device;
>          intx = digl->intx;
>          hvm_pci_intx_deassert(irq_map->dom, device, intx);
>      }
> 
> -    clear_bit(machine_gsi, dpci->dirq_mask);
> -    vector = domain_irq_to_vector(irq_map->dom, machine_gsi);
> -    dpci->mirq[machine_gsi].pending = 0;
> +    for ( machine_gsi = find_first_bit(machine_gsi_map, NR_IRQS);
> +          machine_gsi < NR_IRQS;
> +          machine_gsi = find_next_bit(machine_gsi_map, NR_IRQS,
> +                                      machine_gsi + 1) )
> +    {
> +        clear_bit(machine_gsi, dpci->dirq_mask);
> +        vector = domain_irq_to_vector(irq_map->dom, machine_gsi);
> +        dpci->mirq[machine_gsi].pending = 0;
> +    }
> +
>      spin_unlock(&irq_map->dom->event_lock);
> -    pirq_guest_eoi(irq_map->dom, machine_gsi);
> +
> +    for ( machine_gsi = find_first_bit(machine_gsi_map, NR_IRQS);
> +          machine_gsi < NR_IRQS;
> +          machine_gsi = find_next_bit(machine_gsi_map, NR_IRQS,
> +                                      machine_gsi + 1) )
> +    {
> +        pirq_guest_eoi(irq_map->dom, machine_gsi);
> +    }
>  }
> 
>  int pt_irq_create_bind_vtd(
> 
> --
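
For reference, the collect-then-walk idiom the patch relies on is shown in 
isolation below. The helpers are simplified substitutes for Xen's 
set_bit/find_first_bit/find_next_bit, so this is only a sketch of the pattern, 
not Xen code:

#include <stdio.h>

#define NR_IRQS 64   /* small bitmap, enough for the example */

/* Simplified stand-ins for Xen's bitmap helpers (illustration only). */
static void example_set_bit(int nr, unsigned long long *map)
{
    *map |= 1ULL << nr;
}

static int example_find_next_bit(unsigned long long map, int size, int from)
{
    int i;
    for ( i = from; i < size; i++ )
        if ( map & (1ULL << i) )
            return i;
    return size;   /* "not found", mirroring Xen's helpers */
}

int main(void)
{
    unsigned long long machine_gsi_map = 0;
    int seen[] = { 20, 20, 23, 20 };   /* machine GSIs found per digl entry; note the duplicates */
    unsigned int i;
    int gsi;

    /* Pass 1: record each machine GSI in a bitmap; duplicates collapse. */
    for ( i = 0; i < sizeof(seen) / sizeof(seen[0]); i++ )
        example_set_bit(seen[i], &machine_gsi_map);

    /* Pass 2: act exactly once per distinct machine GSI, like the patch's
     * find_first_bit/find_next_bit loops around the deassert/EOI work. */
    for ( gsi = example_find_next_bit(machine_gsi_map, NR_IRQS, 0);
          gsi < NR_IRQS;
          gsi = example_find_next_bit(machine_gsi_map, NR_IRQS, gsi + 1) )
        printf("EOI machine GSI %d\n", gsi);

    return 0;
}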


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

