
[Xen-users] PCI Passthrough issue with e1000 driver on Fc8 domU


  • To: xen-users@xxxxxxxxxxxxxxxxxxx
  • From: Asim <linkasim@xxxxxxxxx>
  • Date: Tue, 12 Aug 2008 10:45:46 -0500
  • Delivery-date: Tue, 12 Aug 2008 08:46:19 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Hi all,

I'm using a CentOS 4.6 dom0 running a Fedora Core 8 (FC8) domU in PV mode.
It is a completely 32-bit environment. I'm running Xen 3.2, and the Linux
kernel is 2.6.18.8. The domU itself runs perfectly without any issues. I'm
trying to use an e1000 card to perform pass-through I/O for my domU, but
I'm unable to get it working. The entire network environment uses static
IP addresses.

I performed the PCI hiding by appropriately modifying modprobe.conf, and
now the e1000 driver does not load in dom0, nor does the network interface
show up there. However, lspci does still show the device in dom0 (the domU
has no lspci command, so I can't check from that side). Whenever I pass
this PCI device to my domU, the e1000 driver is automatically loaded, the
driver initializes correctly, and the network interface shows up. But I'm
unable to communicate with the outside world. I read through previous
posts and saw that one needs to enable swiotlb explicitly, so in dom0's
grub.conf I added the following line:

       kernel /xen.gz swiotlb=256 noirqdebug

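For completeness, the hiding itself was done roughly like this in dom0's modprobe.conf (a sketch of the standard Xen 3.x pciback approach; the device ID 01:0a.0 is the one from my domU config below — adjust if your layout differs):

```
# /etc/modprobe.conf (dom0): make pciback claim the NIC before e1000 can
install e1000 /sbin/modprobe pciback ; /sbin/modprobe --first-time --ignore-install e1000
options pciback hide=(01:0a.0)
```

An alternative, if pciback is built into the dom0 kernel, is to hide the device on the kernel command line instead, e.g. `pciback.hide=(01:0a.0)` appended to the dom0 kernel line in grub.conf.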
And my domU config file looks like:

kernel = "/boot/vmlinuz-2.6.18.8-xen"
ramdisk = "/boot/initrd-2.6.18.8-xen.img"
memory = 500
name = "fedora.fc8.64-domU"
disk = [ 'tap:aio:/store/images/fedora.fc8.img,sda1,w',
         'tap:aio:/store/images/fedora.swap,sda2,w' ]
root = "/dev/sda1 ro"
pci = '01:0a.0'
extra= 'swiotlb=256,force xencons=tty'


I can see in xm dmesg that the swiotlb gets initialized correctly, but the
domU still fails to connect. As soon as I add a virtual interface (vif) to
my config file, the domU's networking works (but that is obviously not
pass-through; it goes through the bridge in dom0).
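In case it helps diagnose, this is how I've been confirming that dom0 actually gave up the device (standard sysfs paths and the xm tool; the sysfs path assumes the pciback module approach):

```
# In dom0: the NIC should be bound to pciback, not e1000 --
# a 0000:01:0a.0 symlink should appear here
ls /sys/bus/pci/drivers/pciback/

# List the PCI devices assigned to the running domU
xm pci-list fedora.fc8.64-domU
```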


Some lines from dmesg:
.
.
.
[86510.715093] Software IO TLB enabled:
[86510.715095]  Aperture:     0 megabytes
[86510.715096]  Kernel range: c14e8000 - c15a8000
[86510.715097]  Address size: 24 bits
[86510.715561] vmalloc area: e0800000-f51fe000, maxmem 2d7fe000
[86510.722068] Memory: 498504k/520192k available (2350k kernel code,
13408k reserved, 1175k data, 212k init, 0k highmem)
.
.
.

[87078.045930] Intel(R) PRO/1000 Network Driver - version 7.1.9-k4-NAPI
[87078.045933] Copyright (c) 1999-2006 Intel Corporation.
[87078.064028] PCI: Enabling device 0000:00:00.0 (0000 -> 0003)
[87078.064394] e1000:netdev alloced successfully.
[87078.335358] e1000: 0000:00:00.0: e1000_probe: (PCI:33MHz:32-bit)
00:07:e9:39:07:e5
.
.
.


But the result of all this is:

[root@fedora_pristine ~]# ping 128.105.104.103
PING 128.105.104.103 (128.105.104.103) 56(84) bytes of data.
From 128.105.104.127 icmp_seq=1 Destination Host Unreachable
From 128.105.104.127 icmp_seq=2 Destination Host Unreachable
From 128.105.104.127 icmp_seq=3 Destination Host Unreachable

Also,

My routing table is set correctly (the same setup works with a vif):

[root@fedora_pristine ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
128.105.104.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
0.0.0.0         128.105.104.248 0.0.0.0         UG    0      0        0 eth0
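Since "Destination Host Unreachable" reported by the guest's own address usually means ARP resolution is failing at the link layer, these are the checks I plan to run next (standard iputils/net-tools commands; arping may need to be installed on FC8):

```
# In the domU: can we ARP for the gateway at all?
arping -I eth0 128.105.104.248
arp -n          # look for "incomplete" entries

# From another host on the same segment, watch the wire:
tcpdump -ni eth0 arp or icmp
```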


Can anyone please help? Also, if there are any additional ways to debug, let me know.

Regards,
Asim

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

