
Re: [Xen-devel] upstream merge status for 2.6.35, .36?



Hi Konrad,

Congratulations from me as well. I would like to try your rebased tree, so give
a signal when the rebasing is finished.
One thing I'm missing in the list of outstanding issues is pvgrub not working
when using PCI passthrough.
Thanks again for your hard work, it makes domU support in mainline much more
complete :-)

--

Sander


Monday, June 7, 2010, 4:57:43 PM, you wrote:

>> > swiotlb seems to be in linux-next now... Congratulations!

> Whew, it took more time than I anticipated, but yes! Thank you.
>> 
>> Yes, http://lkml.org/lkml/2010/6/5/71
>> 
>> Now that looks exceedingly smooth, but if you look at the date on
>> http://lkml.org/lkml/2009/5/11/223 ... on the bright side, the new swiotlb

> So the SWIOTLB is 1 out of 3. The next component is:

> 2). Xen SWIOTLB. This is the Xen SWIOTLB code that utilizes the SWIOTLB
> proper, which was just made generic enough to be used in this capacity.
> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb-2.6.git 
> xen-swiotlb-0.8.2

> 3). And then the Xen PCI front, which utilizes the Xen-SWIOTLB (and also
> the Xen PCI) to, well, allow guests to have PCI devices passed in.

> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git 
> pv/pcifront-2.6.34

> The 2) and 3) are mostly Xen specific, so they should be much more palatable
> than the first one.
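
> For example, to have a look at those two branches, something along these
> lines should work (plain git; the local directory names are just whatever
> git picks by default):
>
>   git clone git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb-2.6.git
>   cd swiotlb-2.6
>   git checkout -b xen-swiotlb origin/xen-swiotlb-0.8.2
>   cd ..
>
>   git clone git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
>   cd xen
>   git checkout -b pcifront origin/pv/pcifront-2.6.34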

>> branch is both peer-reviewed and user-tested in xen/stable-2.6.32.x AFAICT,

> Kind of. The pcifront-2.6.34 is definitely in xen/stable-2.6.32.x. The
> SWIOTLB + Xen-SWIOTLB system in 2.6.32 is, uhh, swiotlb-0.3 or so I
> think. So it reflects the earlier ideas on how to make it work - but I have to
> stress that the majority of the changes between 0.3 and 0.8.3 are in the
> facade - the underlying code that does the translation has remained
> unchanged. And _all_ of the bugs in the translation have been fixed (we had
> a nasty one at the beginning that fortunately is fixed).

> Also some wild, adventurous folks have been taking the 
> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git pv/merge.2.6.3[x]

> branches (that is, pv/merge.2.6.32, pv/merge.2.6.33, and pv/merge.2.6.34)
> and testing them - they have all of those patches
> (SWIOTLB 0.8.3 + Xen SWIOTLB 0.8 + Xen PCI Front 2.6.34) integrated in.

> Which reminds me, I need to rebase those once more and announce them to
> xen-devel to see if anybody is interested in running them and having
> their name enshrined as 'Tested-by: XX' in the git commits.
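
> If you do want to give one of those merge branches a spin, roughly (branch
> name per the list above; the config steps are only a sketch, use your usual
> Xen domU kernel config):
>
>   git clone git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git
>   cd xen
>   git checkout -b merge-test origin/pv/merge.2.6.34
>   cp /boot/config-$(uname -r) .config   # or your usual Xen domU config
>   make oldconfig && make -j4
>
> and a report back with a line like
>
>   Tested-by: Your Name <your.name@example.com>
>
> is all that is needed for the commit tags.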

>> so the end-result should be bulletproof (as much as it can be :).

> There are some outstanding issues that we know of. I haven't yet gotten
> my head around them, but here is a list of Xen PCI frontend bugs:

> 1). Passing in 4GB or more to a DomU. All the memory that the guest sees is
> RAM, and there are no "holes" for the PCI devices akin to what you have
> on a normal machine (there the hole is 256MB, and it shifts 256MB of RAM
> above the 4GB boundary - we don't do that yet in DomU). Workaround: use less
> memory, or some magic Linux kernel parameter (memhole?) to create a hole.
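
> The "use less memory" workaround in the guest config is just something
> like this (the device BDF and guest name here are made-up examples):
>
>   # /etc/xen/domu-pci.cfg
>   name   = "domu-pci"
>   memory = 3072                  # stay below 4GB, so no PCI hole is needed
>   pci    = [ '0000:04:00.0' ]    # the passed-through device
>
> plus whatever disk/vif lines you normally use.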

> Xen PCI backend: 

> 1) If you have CONFIG_LOCKDEP enabled:
> there is a bug in how the Xen PCI back driver interacts with the XenBus
> that triggers a lock-dependency warning. It is a problem that hasn't been
> addressed yet, but it should not affect everyday usage of PCI cards.
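
> A quick way to check whether your Dom0 kernel has lockdep built in (the
> first form assumes a distro-style config in /boot, the second needs
> CONFIG_IKCONFIG_PROC):
>
>   grep CONFIG_LOCKDEP /boot/config-$(uname -r)
>   # or, if the running kernel exposes its config:
>   zcat /proc/config.gz | grep CONFIG_LOCKDEP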

> 2). The xl toolstack is still experimental. Jeremy has been taking a crack
> at it and has fixed a lot of the issues, but I haven't seen a green light
> from him - so to be on the safe side you might want to use the 'xm' stack.
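
> In practice that just means starting the guest with xm instead of xl,
> e.g. (config path as in the sketch above):
>
>   xm create /etc/xen/domu-pci.cfg
>   # rather than: xl create /etc/xen/domu-pci.cfg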

> 3) Unclean shutdown of a DomU with MSI devices. If you kill the guest
> outright without making it unload the drivers, the PCI device, if it
> uses MSI/MSI-X, might suddenly start sending an IRQ storm. I haven't
> tracked this down yet.
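
> Until that is tracked down, a clean teardown avoids it, roughly (driver
> module and guest name are just examples):
>
>   # inside the guest: unload the driver for the passed-through device
>   modprobe -r e1000e
>   # then, from Dom0, shut the guest down instead of destroying it
>   xm shutdown domu-pci
>   # and keep 'xm destroy domu-pci' as a last resort only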





-- 
Best regards,
 Sander                            mailto:linux@xxxxxxxxxxxxxx


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

