
RE: [Xen-devel] [PATCH 2/3][RFC] MSI/MSI-X support for dom0/driver domain


  • To: "Tian, Kevin" <kevin.tian@xxxxxxxxx>, "Keir Fraser" <Keir.Fraser@xxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
  • Date: Mon, 28 May 2007 22:03:05 +0800
  • Delivery-date: Mon, 28 May 2007 07:01:32 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Aceg/mz7PRs/d4jlSeu7V67d+hYswQACriSgAACTnuAAAZF9OQAB6/ZAAAEo6AYAAAay8AAA4oLCAAAAikAAAO1vKwAACgiQAAGmDRA=
  • Thread-topic: [Xen-devel] [PATCH 2/3][RFC] MSI/MSI-X support for dom0/driver domain

Thanks for the discussion! I will first follow Keir's suggestion and let Xen 
allocate the pirq.

Another point is: should we export the vector to domain0/the driver domain in the long run?

I think the vector is a per-CPU concept: when an interrupt happens, the CPU jumps 
to the IDT entry indexed by the vector. But dom0/domU has no idea of the IDT at 
all, so why should we export the vector to domain0/domU? Isn't the pirq enough?
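
To illustrate why I think the pirq is enough, here is a minimal sketch (not the 
actual Xen code; vector_to_pirq() and the table layout are invented for 
illustration): only Xen needs the vector, because only Xen owns the IDT, and the 
guest is handed nothing but a pirq.

    /* Per-domain mapping kept inside Xen; the guest never sees the vector. */
    struct domain_pirq_map {
        int pirq_to_vector[NR_PIRQS];          /* assumed layout */
    };

    /* When a physical interrupt arrives, Xen translates vector -> pirq and
     * notifies the guest through the event channel bound to that pirq. */
    static void forward_to_guest(struct domain *d, int vector)
    {
        int pirq = vector_to_pirq(d, vector);  /* hypothetical reverse lookup */
        send_guest_pirq(d, pirq);              /* existing event-channel path */
    }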
As for irq/pirq, I think the irq is the index into irq_desc (as stated at 
http://www.webservertalk.com/archive242-2006-5-1471415.html and 
http://marc.info/?l=linux-kernel&m=110021870415938&w=2), while the pirq is a 
virtual interrupt number (something like a gsi) injected by Xen and 
corresponding to a physical interrupt source. What is special in domain0/domainU 
is that in a normal kernel the gsi and irq may differ, while in domain0/domainU 
we can arrange things specially so that irq and pirq are the same.
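
Just to show what I mean by arranging irq and pirq to be the same, a rough 
dom0-side sketch (xen_map_gsi_to_pirq() is a made-up wrapper, not an existing 
call):

    /* dom0 asks Xen for a pirq for a physical gsi and simply reuses that
     * number as its Linux irq, so irq_desc[irq] refers to the same source. */
    static int assign_irq_for_gsi(unsigned int gsi)
    {
        int pirq = xen_map_gsi_to_pirq(gsi);   /* hypothetical hypercall wrapper */

        return pirq;                           /* irq == pirq by construction */
    }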

With this, for the IOAPIC, domain0 will get a pirq from Xen for a specific 
physical gsi, and then in io_apic_set_pci_routing() the irq, instead of the 
vector, will be used. For MSI(-X), physdev_msi_format() will pass the pirq/domain 
pair, and Xen will return content with the vector information in it.
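
For the MSI(-X) case, the hypercall argument could look roughly like the sketch 
below (the struct fields and PHYSDEVOP_msi_format are guesses for illustration, 
not the actual patch; only HYPERVISOR_physdev_op() itself is the existing 
interface): dom0 passes the domain/pirq pair in, and Xen writes back the MSI 
address/data with the vector encoded inside.

    struct physdev_msi_format {
        /* IN */
        domid_t  domid;
        uint32_t pirq;
        /* OUT: filled in by Xen; the vector ends up encoded in msi_data */
        uint64_t msi_addr;
        uint32_t msi_data;
    };

    static int format_msi(domid_t domid, uint32_t pirq,
                          uint64_t *addr, uint32_t *data)
    {
        struct physdev_msi_format op = { .domid = domid, .pirq = pirq };

        if (HYPERVISOR_physdev_op(PHYSDEVOP_msi_format, &op) != 0)
            return -1;

        *addr = op.msi_addr;   /* to be programmed into the MSI capability */
        *data = op.msi_data;
        return 0;
    }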

I'm not sure my understanding is right. Even if it is, I also wonder whether we 
should do it this way, since it will cause a lot of changes to the ioapic-xen.c 
code (we may need fewer changes after domain0 switches to the latest kernel).

Thanks
Yunhong Jiang



-----Original Message-----
From: Tian, Kevin 
Sent: 28 May 2007 20:54
To: 'Keir Fraser'; Jiang, Yunhong; xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-devel] [PATCH 2/3][RFC] MSI/MSI-X support for dom0/driver domain

>From: Keir Fraser [mailto:Keir.Fraser@xxxxxxxxxxxx]
>Sent: 28 May 2007 20:41
>
>On 28/5/07 13:29, "Tian, Kevin" <kevin.tian@xxxxxxxxx> wrote:
>
>> I understand your point, and yes, that's an easy implementation. My
>> small concern now is just whether it's worth pulling Xen into resource
>> allocation for which Xen has no reference at all. Wouldn't the
>> component that assigns the device irq be better placed to do the
>> allocation based on its own policy? At the current stage, an HVM domain
>> has the device model to provide the 'pirq' layout, and a driver domainU
>> has pciback. Even if there are later other places that assign device
>> irqs, I think it is still the responsibility of that place to construct
>> the pirq name space for the domU. For example, what if the simple Xen
>> pirq allocation policy doesn't satisfy a special requirement of that
>> place, such as a special prime-number style (just kidding)? If such a
>> simple interface, useless from Xen's POV, has no users now and may not
>> address all possibilities in the future, do we need it at all?
>
>You may be right. I just like to keep the hypervisor interfaces as flexible
>as possible, to avoid unnecessarily baking in assumptions based on their
>initial usage. It's a pretty small issue actually, since we can get the same
>behaviour by dom0 attempting to map onto pirqs from zero upwards until it
>finds one that isn't already in use.
>
> -- Keir
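
To make that concrete, the probing you describe would be something like the 
rough sketch below (try_map_gsi_to_pirq() is just a placeholder, not a real 
hypercall):

    /* Rough sketch of the fallback: dom0 probes pirqs from zero upwards
     * until Xen accepts the mapping (i.e. the pirq is not already in use). */
    static int find_free_pirq(unsigned int gsi)
    {
        int pirq;

        for (pirq = 0; pirq < NR_PIRQS; pirq++) {
            if (try_map_gsi_to_pirq(gsi, pirq) == 0)  /* placeholder call */
                return pirq;                          /* mapping accepted */
        }
        return -1;                                    /* no free pirq */
    }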

Yep, exactly a small issue. Let's look forward to Yunhong's next version after 
incorporating your comments. :-)

Thanks,
Kevin

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

