
[Xen-users] DomU access to dual-ported memory area on PCI card


  • To: Xen-users@xxxxxxxxxxxxxxxxxxx
  • From: "David Müller (ELSOFT AG)" <d.mueller@xxxxxxxxx>
  • Date: Fri, 12 Aug 2011 15:53:25 +0200
  • Cc:
  • Delivery-date: Fri, 12 Aug 2011 06:55:32 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>

Hello

I am having trouble accessing a dual-ported memory (DPM) area on a PCI
card using the Linux UIO driver framework.

PCI passthrough of the card from Dom0 to DomU seems to be working: the
UIO drivers load successfully, and the physical address and length of
the DPM area show up under "/sys/class/uio/uio0/maps/map0/".

Both Dom0 and DomU are based on Debian 6.0.2 (Squeeze):
Linux-2.6.32-5-xen-686 / Xen-4.0.1-2

I try to access the DPM area like this (pseudocode):

 fd = open("/dev/uio0", O_RDWR);
 dpm_addr = uio_get_addr("/sys/class/uio/uio0/maps/map0/addr");
 dpm_size = uio_get_size("/sys/class/uio/uio0/maps/map0/size");

 /* offset 0 selects map0 in the UIO mmap interface */
 dpm_base = mmap(NULL, dpm_size, PROT_READ | PROT_WRITE,
                 MAP_LOCKED | MAP_POPULATE | MAP_SHARED,
                 fd, 0);

 /* first 32-bit read from the mapped DPM area */
 readl(dpm_base);

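In case it helps, here is a minimal self-contained version of the test
program. sysfs_read_hex() below is just a stand-in for my
uio_get_addr()/uio_get_size() helpers, and the final read replaces
readl() with a plain load through a volatile pointer:

 #define _GNU_SOURCE  /* for MAP_LOCKED / MAP_POPULATE */
 #include <fcntl.h>
 #include <stdint.h>
 #include <stdio.h>
 #include <stdlib.h>
 #include <sys/mman.h>
 #include <unistd.h>

 /* Parse a hex value such as "0xfe9f0000" from a sysfs attribute. */
 static unsigned long sysfs_read_hex(const char *path)
 {
     unsigned long val;
     FILE *f = fopen(path, "r");
     if (!f || fscanf(f, "%lx", &val) != 1) {
         perror(path);
         exit(1);
     }
     fclose(f);
     return val;
 }

 int main(void)
 {
     unsigned long addr = sysfs_read_hex("/sys/class/uio/uio0/maps/map0/addr");
     unsigned long size = sysfs_read_hex("/sys/class/uio/uio0/maps/map0/size");

     int fd = open("/dev/uio0", O_RDWR);
     if (fd < 0) {
         perror("/dev/uio0");
         return 1;
     }

     /* offset 0 selects map0 in the UIO mmap interface */
     volatile uint32_t *dpm = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                   MAP_LOCKED | MAP_POPULATE | MAP_SHARED,
                                   fd, 0);
     if (dpm == MAP_FAILED) {
         perror("mmap");
         return 1;
     }

     /* this is the read access that triggers the crash */
     printf("addr=0x%lx size=0x%lx word0=0x%08x\n", addr, size, dpm[0]);
     return 0;
 }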

Although the "mmap()" function does not report an error, the following
is printed to the DomU console:

[  357.817723] 1 multicall(s) failed: cpu 0
[  357.820021] Pid: 636, comm: map_dpm Not tainted 2.6.32-5-xen-686 #1
[  357.820021] Call Trace:
[  357.820021]  [<c10042e5>] ? xen_mc_flush+0xa2/0x150
[  357.820021]  [<c1005051>] ? xen_leave_lazy_mmu+0x5/0xa
[  357.820021]  [<c10a4910>] ? remap_pfn_range+0x286/0x303
[  357.820021]  [<c892846e>] ? uio_mmap+0xbc/0xe1 [uio]
[  357.820021]  [<c10a81ed>] ? mmap_region+0x267/0x443
[  357.820021]  [<c109e10a>] ? sys_mmap_pgoff+0xc8/0x147
[  357.820021]  [<c128fb5c>] ? do_debug+0x117/0x126
[  357.820021]  [<c1008f7c>] ? syscall_call+0x7/0xb
[  357.820021] ------------[ cut here ]------------
[  357.820021] WARNING: at
/tmp/buildd/linux-2.6-2.6.32/debian/build/source_i386_xen/arch/x86/xen/multicalls.c:182
xen_leave_lazy_mmu+0x5/0xa()
[  357.820021] Modules linked in: evdev snd_pcm snd_timer snd soundcore
uio_testdrv snd_page_alloc uio pcspkr ext3 jbd mbcache xen_blkfront
xen_netfront
[  357.820021] Pid: 636, comm: map_dpm Not tainted 2.6.32-5-xen-686 #1
[  357.820021] Call Trace:
[  357.820021]  [<c1005051>] ? xen_leave_lazy_mmu+0x5/0xa
[  357.820021]  [<c1005051>] ? xen_leave_lazy_mmu+0x5/0xa
[  357.820021]  [<c1037819>] ? warn_slowpath_common+0x5e/0x8a
[  357.820021]  [<c103784f>] ? warn_slowpath_null+0xa/0xc
[  357.820021]  [<c1005051>] ? xen_leave_lazy_mmu+0x5/0xa
[  357.820021]  [<c10a4910>] ? remap_pfn_range+0x286/0x303
[  357.820021]  [<c892846e>] ? uio_mmap+0xbc/0xe1 [uio]
[  357.820021]  [<c10a81ed>] ? mmap_region+0x267/0x443
[  357.820021]  [<c109e10a>] ? sys_mmap_pgoff+0xc8/0x147
[  357.820021]  [<c128fb5c>] ? do_debug+0x117/0x126
[  357.820021]  [<c1008f7c>] ? syscall_call+0x7/0xb
[  357.820021] ---[ end trace 2aefe407d15d45f9 ]---

The Xen Hypervisor (xm dmesg) reports the following message:

(XEN) mm.c:798:d1 Non-privileged (1) attempt to map I/O space 7fffffff

The read access to the DPM results in a crash.
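For what it's worth, my understanding is that a PV DomU may only map
machine frames for which Xen has granted it I/O memory permission, and
the message above looks like such a mapping being refused. With xm this
permission is normally granted via an "iomem" line in the guest config
file, along these lines (the page frame numbers below are only
placeholders, not values from my setup):

 # allow the guest to map 0x10 frames starting at machine PFN 0xfe9f0
 iomem = [ "fe9f0,10" ]

I have not yet verified whether this applies to my configuration.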


If I run the same program in Dom0, the "mmap()" call succeeds, but the
access to the DPM area results in the following:

Dom0 console:

[  309.161135] map_dpm: Corrupted page table at address b7fdc000
[  309.161213] *pdpt = 00000000021df001 *pde = 0000000001f35067 *pte = 80000ffffffff237
[  309.161442] Bad pagetable: 000d [#1] SMP
[  309.161601] last sysfs file: /sys/hypervisor/version/minor
[  309.161674] Modules linked in: uio_testdrv uio xt_state xt_physdev
iptable_filter bridge stp cpufreq_powersave cpufreq_userspace
cpufreq_stats cpufreq_conservative xt_tcpudp ipt_MASQUERADE iptable_nat
nf_nat nf_conntrack_ipv4 nf_conntrack nf_defrag_ipv4 ip_tables x_tables
xen_evtchn xenfs loop firewire_sbp2 i915 drm_kms_helper drm snd_pcm
i2c_algo_bit snd_timer video snd rng_core i2c_i801 output i2c_core
soundcore snd_page_alloc parport_pc evdev parport pcspkr psmouse
serio_raw processor button acpi_processor ext3 jbd mbcache sd_mod
crc_t10dif ata_generic uhci_hcd ata_piix firewire_ohci libata thermal
firewire_core crc_itu_t ehci_hcd thermal_sys scsi_mod usbcore nls_base
e1000e [last unloaded: scsi_wait_scan]
[  309.165103]
[  309.165103] Pid: 1793, comm: map_dpm Not tainted (2.6.32-5-xen-686 #1) PIP
[  309.165103] EIP: 0073:[<0804877b>] EFLAGS: 00210392 CPU: 1
[  309.165103] EIP is at 0x804877b
[  309.165103] EAX: 00000015 EBX: b7fdc000 ECX: bffffc58 EDX: b7fd6340
[  309.165103] ESI: 00001000 EDI: fe9f0000 EBP: bffffcc8 ESP: bffffc70
[  309.165103]  DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b
[  309.165103] Process map_dpm (pid: 1793, ti=c1c04000 task=c22a6640 task.ti=c1c04000)
[  309.165103]
[  309.165103] EIP: [<0804877b>] 0x804877b SS:ESP 007b:bffffc70
[  309.165103] ---[ end trace 0febb3eb16111c04 ]---


"xm dmesg" output:

(XEN) d0:v1: reserved bit in page table (ec=000D)
(XEN) Pagetable walk from b7fdc000:
(XEN)  L3[0x002] = 000000003a1df001 000021df
(XEN)  L2[0x1bf] = 0000000039f35067 00001f35
(XEN)  L1[0x1dc] = 800007fffffff237 ffffffff
(XEN) ----[ Xen-4.0.1  x86_32p  debug=n  Not tainted ]----
(XEN) CPU:    0
(XEN) EIP:    0073:[<0804877b>]
(XEN) EFLAGS: 00210392   EM: 0   CONTEXT: pv guest
(XEN) eax: 00000015   ebx: b7fdc000   ecx: bffffc58   edx: b7fd6340
(XEN) esi: 00001000   edi: fe9f0000   ebp: bffffcc8   esp: bffffc70
(XEN) cr0: 8005003b   cr4: 000026f0   cr3: 3a160000   cr2: b7fdc000
(XEN) ds: 007b   es: 007b   fs: 0000   gs: 0033   ss: 007b   cs: 0073
(XEN) Guest stack trace from esp=bffffc70:
(XEN)    08048926 b7fdc000 00001000 00001000 b7f96b99 b7ebf685 bffffc98 00000005
(XEN)    b7fd4ff4 08049a44 bffffca8 7665642f 6f69752f 08040030 bffffcd8 00010000
(XEN)    b7fd5304 b7fd4ff4 bffffce0 b7fd4ff4 00000000 00000000 bffffd58 b7ea6c76
(XEN)    080487d0 00000000 bffffd58 b7ea6c76 00000001 bffffd84 bffffd8c b7fe06e0
(XEN)    bffffd40 ffffffff b7ffeff4 08048357 00000001 bffffd40 b7ff0626 b7fffab0
(XEN)    b7fe09d0 b7fd4ff4 00000000 00000000 bffffd58 d20c12c4 f92da4d4 00000000
(XEN)    00000000 00000000 00000001 08048580 00000000 b7ff6210 b7ea6b9b b7ffeff4
(XEN)    00000001 08048580 00000000 080485a1 0804868c 00000001 bffffd84 080487d0
(XEN)    080487c0 b7ff1040 bffffd7c b7fff8f8 00000001 bffffe97 00000000 bffffebf
(XEN)    bffffeca bffffeda bffffeea bffffef4 bffffeff bfffff41 bfffff55 bfffff64
(XEN)    bfffff88 bfffff99 bfffffa2 bfffffad bfffffb5 bfffffc7 00000000 00000020
(XEN)    b7fe2414 00000021 b7fe2000 00000010 1fc9d3f5 00000006 00001000 00000011
(XEN)    00000064 00000003 08048034 00000004 00000020 00000005 00000007 00000007
(XEN)    b7fe3000 00000008 00000000 00000009 08048580 0000000b 00000000 0000000c
(XEN)    00000000 0000000d 00000000 0000000e 00000000 00000017 00000000 00000019
(XEN)    bffffe7b 0000001f bfffffd4 0000000f bffffe8b 00000000 00000000 00000000
(XEN)    00000000 00000000 66000000 e9f3306d bedd96fa 51a7241e 69a11614 00363836
(XEN)    00000000 2f000000 656d6f68 6573752f 69632f72 652f5866 706d6178 6d2f656c
(XEN)    645f7061 6d2f6d70 645f7061 54006d70 3d4d5245 756e696c 48530078 3d4c4c45
(XEN)    6e69622f 7361622f 55480068 4f4c4853 3d4e4947 534c4146 53550045 723d5245


The same program runs fine if the Xen Hypervisor is disabled completely.


Any idea(s) what is going on?


Dave

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

