
RE: [Xen-devel] [RFC][PATCH 0/6] HVM PCI Passthrough (non-IOMMU)



Hi John,

Thanks for testing out our patches!
My comments below.

> -----Original Message-----
> From: John Byrne [mailto:john.l.byrne@xxxxxx] 
> Sent: Friday, June 08, 2007 5:53 AM
> To: Guy Zana
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [RFC][PATCH 0/6] HVM PCI Passthrough 
> (non-IOMMU)
> 
> 
> Guy,
> 
> I tried your patches with a bnx2 NIC on SLES10 and they didn't work.
> 
> The first reason was that you mask off the capabilities bit 
> in the PCI status. If I got rid of this, I could at least get 
> the NIC to configure, but it didn't work and the dropped 
> packets looked to be random garbage, so I don't think it was 
> talking to the device properly. (But I understand almost 
> nothing about PCI device configuration, so I don't know what 
> to look for.)
> 

The released patches should be considered "developmental"; there is still some work 
to be done (not too much, though :) ) to make them usable for everyone. Are you sure 
you mapped the right IRQ? Please post the qemu-dm log file and the output of xm dmesg. 
The capabilities bits are masked off so that we don't have to handle MSIs and 
power-management (ACPI) related capabilities yet; those can be quite a pain when 
doing pass-through for integrated devices.
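For reference, here is a minimal sketch of what the status-register masking amounts
to (the helper name pt_status_read() is ours for illustration, not the function
used in the patches):

    /* Illustrative only: hide the Capabilities List bit (bit 4 of the PCI
     * Status register at config offset 0x06) when forwarding a config-space
     * read to the guest, so the guest never walks the capability chain
     * (MSI, power management). */
    #include <stdint.h>

    #define PCI_STATUS           0x06   /* Status register offset */
    #define PCI_STATUS_CAP_LIST  0x10   /* Capabilities List bit  */

    uint16_t pt_status_read(uint16_t real_status)
    {
        /* Strip the capability indication before handing it to the guest. */
        return (uint16_t)(real_status & ~PCI_STATUS_CAP_LIST);
    }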

One more thing: does this NIC have an expansion ROM?
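(lspci -v will show an "Expansion ROM at ..." line if it does. Below is a rough
sketch of the same check done by sizing the ROM BAR at config offset 0x30;
read_conf32()/write_conf32() are placeholder accessors, not an existing API.)

    /* Rough sketch, assuming placeholder config-space accessors: a device
     * implements an expansion ROM iff the ROM BAR (offset 0x30 in a type-0
     * header) has writable address bits. */
    #include <stdint.h>

    #define PCI_ROM_ADDRESS       0x30
    #define PCI_ROM_ADDRESS_MASK  (~0x7ffu)   /* address bits of the ROM BAR */

    extern uint32_t read_conf32(unsigned int off);              /* placeholder */
    extern void write_conf32(unsigned int off, uint32_t val);   /* placeholder */

    int has_expansion_rom(void)
    {
        uint32_t orig = read_conf32(PCI_ROM_ADDRESS);
        uint32_t sized;

        write_conf32(PCI_ROM_ADDRESS, PCI_ROM_ADDRESS_MASK);  /* enable bit stays 0 */
        sized = read_conf32(PCI_ROM_ADDRESS) & PCI_ROM_ADDRESS_MASK;
        write_conf32(PCI_ROM_ADDRESS, orig);                   /* restore */

        return sized != 0;
    }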

> I haven't noticed the merge tree springing into existence 
> on xenbits, so is there any progress on making this into a 
> real feature? It sounds like most of the work needs to be 
> done between you and Intel, but I could certainly help with testing.
> 

That would be great!

I think that both patch sets (ours and Intel's) need some more work before we can 
start merging.
Neocleus has already merged some parts of the Intel patches (MMIO & PIO 
handling). We are also aiming for 64-bit (x86) support in the next release.

> One thing I am interested in is, with the 1:1 mapping, could 
> we disable the VT page-fault handling? I've found that the 
> page-fault overhead for VT is horrible and would probably 
> affect fork-exec benchmarks significantly.

Cool idea! Our CTO thought about it as well :)
It's hard to avoid the VT page-fault handler entirely: there are issues with memory 
protection (security), and with the memory remapping we want to do in the future 
(to support BIOS & expansion ROM duplication). I agree that it can be made faster, 
though; it may require some drastic changes in the hypervisor.
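Just to illustrate where the appeal comes from (a sketch of the idea, not our
implementation): under a strict 1:1 mapping the guest-physical to machine
translation is an identity function, so in principle there is little for the
shadow page-fault path to do, except for the regions we still have to trap.

    /* Illustration only: with a strict 1:1 (identity) p2m, translation is
     * trivial. The regions that still need trapping (MMIO, and the BIOS /
     * expansion-ROM copies mentioned above) are exactly why the VT
     * page-fault path cannot be dropped entirely today. */
    #include <stdint.h>

    typedef uint64_t gfn_t;   /* guest frame number   */
    typedef uint64_t mfn_t;   /* machine frame number */

    static inline mfn_t p2m_identity_lookup(gfn_t gfn)
    {
        return (mfn_t)gfn;    /* guest frame == machine frame */
    }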

Thanks,
Guy.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
