RE: [Xen-devel] [PATCH][RFC] Support more Capability Structures and Device Specific
I'm using x86_64 c/s 17888: 6ace85eb96c0, and assigning an 82541PI Gigabit Ethernet NIC to the guest. I also tried "pci=nomsi" for Dom0, and the issue is still there. When the issue happens, eth0 doesn't appear in /proc/interrupts even though the device driver module is loaded. The issue doesn't happen every time. Really strange...

Thanks,
-- Dexuan

-----Original Message-----
From: Yuji Shimada [mailto:shimada-yxb@xxxxxxxxxxxxxxx]
Sent: June 30, 2008 16:15
To: Cui, Dexuan
Cc: Ian Jackson; xen-devel@xxxxxxxxxxxxxxxxxxx; Dong, Eddie; Keir Fraser
Subject: Re: [Xen-devel] [PATCH][RFC] Support more Capability Structures and Device Specific

Hi Dexuan,

I've tested my patch with CentOS 5.1 and a PCI/PCIe NIC. In my test environment (with "pci=nomsi" set as a Dom0 boot parameter), the guest OS can use the assigned NIC and can communicate with an external machine.

Does the guest OS receive interrupts? You can check via /proc/interrupts.

Thanks.
-- Yuji Shimada

> Hi Yuji,
> I looked at the patch. It seems pretty good.
> Except for the (temporary) absence of the MSI/MSI-X stuff, it looks like
> the passthrough policy in the patch is almost the same as what is
> discussed in the PDF file Eddie posted.
>
> I also ran some tests against the patch, and found there may be some
> stability issues:
> I.e., when I boot a 32e RHEL5u1 guest (with the "pci=nomsi" parameter
> added), it can easily (30%~80% of the time) stall for a very long time
> (i.e., >40s) at "Starting udev:", and after I log in to a shell, the NIC
> seems not to be present (the guest has no network available), but "lspci"
> shows the NIC is there.
> If I use the Qemu without your patch, the issue disappears at once, and
> the NIC in the guest works well.
>
> I haven't found an issue in your patch yet. :)
>
> Thanks,
> -- Dexuan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
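[Editorial note: a minimal sketch of the /proc/interrupts check discussed above. The sample text below is made up for illustration; on a live guest you would simply run `grep eth0 /proc/interrupts` directly.]

```shell
#!/bin/sh
# Hypothetical snapshot of /proc/interrupts from a guest where the assigned
# NIC (eth0) did register its interrupt line.  The IRQ numbers and counts
# here are invented sample data, not output from a real machine.
sample='           CPU0
  0:     123456    IO-APIC-edge   timer
 16:       7890    IO-APIC-level  eth0'

# If the passthrough NIC is healthy, its interrupt line shows up here.
# If the driver module is loaded but no line appears, the device is not
# receiving interrupts -- the symptom Dexuan describes.
if printf '%s\n' "$sample" | grep -q 'eth0'; then
    status="eth0 has an interrupt line bound"
else
    status="eth0 missing: driver loaded but no IRQ registered"
fi
echo "$status"
```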