
Re: [Xen-devel] Xen 4.2 Release Plan / TODO



On Thu, Mar 22, 2012 at 10:19 AM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> On Thu, 2012-03-22 at 10:08 +0000, George Dunlap wrote:
>> On Thu, Mar 22, 2012 at 9:53 AM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> 
>> wrote:
>> > On Thu, 2012-03-22 at 09:35 +0000, George Dunlap wrote:
>> >> On Mon, Mar 19, 2012 at 10:57 AM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> 
>> >> wrote:
>> >> >      * xl compatibility with xm:
>> >> >              * feature parity wrt driver domain support (George Dunlap)
>> >> I just discovered (while playing with driver domains) that xl is
>> >> missing one bit of feature parity with xm for PCI passthrough for PV
>> >> guests -- and that's the "pci quirk" config file support.  I'm going
>> >> to ask Intel if they have an interest in porting it over; I think it
>> >> should at least be a "nice-to-have", and it may be a low-level
>> >> blocker, as a lot of devices won't work when passed through without it.
>> >
>> > This is the stuff in tools/python/xen/xend/server/pciquirk.py ?
>> >
>> > pciback in upstream doesn't mention "quirk" which suggests no support
>> > for the necessary sysfs node either?
>>
>> Ah, interesting -- that's worth tracking down.  Maybe there's a better
>> way to deal with quirks?  Or maybe it just hasn't been upstreamed yet
>> (or perhaps even implemented in pvops?).  I'm using the Debian squeeze
>> 2.6.32-5-xen-686 kernel.
>
> I told a lie -- the code does seem to be there in mainline
> (drivers/xen/xen-pciback/conf_space_quirks.c et al). Not sure how grep
> missed it.
>
> Does anyone know what the actual purpose/function of the single defined
> quirk is? 10845:df80de098d15, which introduces it, doesn't really say;
> it's just a bunch of magic register frobbing as far as I'm concerned.
>
> I guess you have a tg3 and are suffering from this exact quirk?
>
> It's an awful lot of scaffolding on both the userspace and kernel side
> to support a generic quirks system which has had exactly one quirk since
> it was introduced in mid 2006. Perhaps we should just address the
> specific tg3 issue directly?

On the contrary, I don't have a tg3, but an Intel NIC that uses
Linux's igb driver, and another NIC that uses the bnx2 driver.  When I
pass those through to a PV guest, I get the following messages printed
from dom0's pciback, respectively:

(for igb)
[   77.619293] pciback 0000:07:00.0: PCI INT A -> GSI 45 (level, low) -> IRQ 45
[   77.626683] pciback 0000:07:00.0: Driver tried to write to a read-only configuration space field at offset 0xa8, size 2. This may be harmless, but if you have problems with your device:
[   77.626685] 1) see permissive attribute in sysfs
[   77.626687] 2) report problems to the xen-devel mailing list along with details of your device obtained from lspci.

(for bnx2)
[  363.582059] pciback 0000:02:00.0: PCI INT A -> GSI 32 (level, low) -> IRQ 32
[  363.590050] pciback 0000:02:00.0: Driver tried to write to a read-only configuration space field at offset 0x68, size 4. This may be harmless, but if you have problems with your device:
[  363.590054] 1) see permissive attribute in sysfs
[  363.590055] 2) report problems to the xen-devel mailing list along with details of your device obtained from lspci.

And at least one person has solved this by adding something to the
"quirks" file (search for "PCI permissions"):
http://technical.bestgrid.org/index.php/Xen:_assigning_PCI_devices_to_a_domain
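
As far as I can tell, what xend's pciquirk.py boils down to on the
kernel side is writing entries into pciback's "quirks" sysfs node so
that guest writes to the listed config-space fields are let through.
A minimal Python sketch of that is below; note the "bdf-offset:size:mask"
string format is my recollection of what pciback's pci_stub.c parses and
should be double-checked against the kernel in question, and the
offset/size values are purely illustrative:

    # Minimal sketch of what xend's pciquirk.py does: expose selected PCI
    # config-space fields to the guest by writing them to pciback's
    # "quirks" sysfs node.  The "bdf-offset:size:mask" string format is an
    # assumption based on pciback's pci_stub.c -- verify before relying on it.

    QUIRKS_NODE = "/sys/bus/pci/drivers/pciback/quirks"

    def add_quirk(bdf, offset, size, mask=0xffffffff):
        """Allow guest writes to one config-space field of device 'bdf'."""
        entry = "%s-%08x:%x:%08x" % (bdf, offset, size, mask)
        with open(QUIRKS_NODE, "w") as f:
            f.write(entry)

    if __name__ == "__main__":
        # Illustrative only: the offset pciback complained about for the
        # bnx2 NIC in the second log above.
        add_quirk("0000:02:00.0", 0x68, 4)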

And indeed, the devices don't work quite right, AFAICT.  So I'm gathering
that the "quirks" file lists areas of PCI configuration space which it is
safe for pciback to allow the guest to modify.  (I'm going to hack
libxl to set "permissive" to test this theory; a rough manual
equivalent is sketched below.)
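
In case anyone wants to try the same thing by hand in the meantime,
marking the device permissive from dom0 is just a sysfs write.  A rough
sketch, assuming the mainline-style /sys/bus/pci/drivers/pciback/permissive
path (and run as root):

    # Rough manual equivalent of "hack libxl to set permissive": tell
    # pciback to stop filtering config-space writes for one passed-through
    # device.  Assumes the driver directory is named "pciback"; on some
    # kernels it may differ.  Permissive mode disables the filtering
    # entirely, so treat it as a debugging aid only.

    PERMISSIVE_NODE = "/sys/bus/pci/drivers/pciback/permissive"

    def set_permissive(bdf):
        """Mark the device with the given BDF (e.g. '0000:07:00.0') permissive."""
        with open(PERMISSIVE_NODE, "w") as f:
            f.write(bdf)

    if __name__ == "__main__":
        set_permissive("0000:07:00.0")   # the Intel NIC from the first log above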

 -George

>
>>
>> > tools/examples/xend-pci-quirks.sxp  seems to only have a quirk for a
>> > single card?
>>
>> Yes, well I could add two more cards just from experience w/ one of my
>> test boxen. :-)
>>
>> > I don't think we want to implement an SXP parser for xl/libxl so if this
>> > is reimplemented I think a different format should be used.
>>
>> Since we're using yajl anyway, JSON might not be a bad option.
>>
>> Anyway, I'll ping the Intel guy who recently posted a patch to libxl_pci.c.
>>
>>  -George
>>
>> >
>> > Anyway, I'll put this onto the list.
>> >
>> > Ian
>> >
>> >>
>> >> >              * xl support for "rtc_timeoffset" and "localtime" (Lin
>> >> >                Ming, Patches posted)
>> >> >      * More formally deprecate xm/xend. Manpage patches already in
>> >> >        tree. Needs release noting and communication around -rc1 to
>> >> >        remind people to test xl.
>> >> >      * Domain 0 block attach & general hotplug when using qdisk backend
>> >> >        (need to start qemu as necessary etc) (Stefano S)
>> >> >      * file:// backend performance. qemu-xen-traditional's qdisk is quite
>> >> >        slow & blktap2 not available in upstream kernels. Need to
>> >> >        consider our options:
>> >> >              * qemu-xen's qdisk is thought to be well performing but
>> >> >                qemu-xen is not yet the default. Complexity arising from
>> >> >                splitting qemu-for-qdisk out from qemu-for-dm and
>> >> >                running N qemu's.
>> >> >              * potentially fully userspace blktap could be ready for
>> >> >                4.2
>> >> >              * use /dev/loop+blkback. This requires loop driver AIO and
>> >> >                O_DIRECT patches which are not (AFAIK) yet upstream.
>> >> >              * Leverage XCP's blktap2 DKMS work.
>> >> >              * Other ideas?
>> >> >      * Improved Hotplug script support (Roger Pau Monné, patches
>> >> >        posted)
>> >> >      * Block script support -- follows on from hotplug script (Roger
>> >> >        Pau Monné)
>> >> >
>> >> > hypervisor, nice to have:
>> >> >      * solid implementation of sharing/paging/mem-events (using work
>> >> >        queues) (Tim Deegan, Olaf Hering et al -- patches posted)
>> >> >              * "The last patch to use a waitqueue in
>> >> >                __get_gfn_type_access() from Tim works.  However, there
>> >> >                are a few users who call __get_gfn_type_access with the
>> >> >                domain_lock held. This part needs to be addressed in
>> >> >                some way."
>> >> >      * Sharing support for AMD (Tim, Andres).
>> >> >      * PoD performance improvements (George Dunlap)
>> >> >
>> >> > tools, nice to have:
>> >> >      * Configure/control paging via xl/libxl (Olaf Hering, lots of
>> >> >        discussion around interface, general consensus reached on what
>> >> >        it should look like)
>> >> >      * Upstream qemu feature patches:
>> >> >              * Upstream qemu PCI passthrough support (Anthony Perard,
>> >> >                patches sent)
>> >> >              * Upstream qemu save restore (Anthony Perard, Stefano
>> >> >                Stabellini, patches sent, waiting for upstream ack)
>> >> >      * Nested-virtualisation. Currently "experimental". Likely to
>> >> >        release that way.
>> >> >              * Nested SVM. Tested in a variety of configurations but
>> >> >                still some issues with the most important use case (w7
>> >> >                XP mode) [0]  (Christoph Egger)
>> >> >              * Nested VMX. Needs nested EPT to be genuinely useful.
>> >> >                Need more data on testedness etc (Intel)
>> >> >      * Initial xl support for Remus (memory checkpoint, blackholing)
>> >> >        (Shriram, patches posted, blocked behind qemu save restore
>> >> >        patches)
>> >> >      * xl compatibility with xm:
>> >> >              * xl support for autospawning vncviewer (vncviewer=1 or
>> >> >                otherwise) (Goncalo Gomes)
>> >> >              * support for vif "rate" parameter (Mathieu Gagné)
>> >> >
>> >> > [0] http://lists.xen.org/archives/html/xen-devel/2012-03/msg00883.html
>> >> >
>> >> >

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

