
Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug



Much easier. Do I need to install from source, or is there a package I can install?

Regards


On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:
On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> I did not.  I do not have the toolchain installed.  I may have time later
> today to try the patch.  Are there any specific instructions on how to
> patch the src, compile and install?

There actually should be a new version of Xen 4.4-rcX which will have the
fix. That might be easier for you?
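
If patching the tree you already have turns out to be necessary, the from-source flow is roughly the following. This is only a sketch, assuming a git checkout and a Debian-style dom0; the tag and patch file names are placeholders:

# Fetch the Xen source (or reuse an existing checkout / release tarball)
git clone git://xenbits.xen.org/xen.git
cd xen
git checkout RELEASE-4.3.1                 # placeholder tag; use the release (or 4.4 RC) you want
patch -p1 < /path/to/suspected-fix.patch   # placeholder name for the proposed fix
./configure
make world                                 # builds the hypervisor, tools and bundled qemu
sudo make install
sudo update-grub                           # then reboot into the new xen.gz entry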
>
> Regards
>
>
> On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> konrad.wilk@xxxxxxxxxx> wrote:
>
> > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser wrote:
> > > Hi all,
> > >
> > > I am attempting to do PCI passthrough of an Intel ET card (4x1G NIC) to
> > > an HVM.  I have been attempting to resolve this issue on the xen-users
> > > list, but I was advised to post it to this list. (Initial Message -
> > >
> > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html)
> > >
> > > The machine I am using as the host is a Dell PowerEdge server with a
> > > Xeon E3-1220 and 4 GB of RAM.
> > >
> > > The possible bug is the following:
> > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > char device redirected to /dev/pts/5 (label serial0)
> > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > ....
> > >
> > > I believe it may be similar to this thread
> > >
> > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > >
> > >
> > > Additional info that may be helpful is below.
> >
> > Did you try the patch?
> > >
> > > Please let me know if you need any additional information.
> > >
> > > Thanks in advance for any help provided!
> > > Regards
> > >
> > > ###########################################################
> > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > ###########################################################
> > > # Configuration file for Xen HVM
> > >
> > > # HVM Name (as appears in 'xl list')
> > > name="ubuntu-hvm-0"
> > > # HVM Build settings (+ hardware)
> > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > builder='hvm'
> > > device_model='qemu-dm'
> > > memory=1024
> > > vcpus=2
> > >
> > > # Virtual Interface
> > > # Network bridge to USB NIC
> > > vif=['bridge=xenbr0']
> > >
> > > ################### PCI PASSTHROUGH ###################
> > > # PCI Permissive mode toggle
> > > #pci_permissive=1
> > >
> > > # All PCI Devices
> > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0', '05:00.1']
> > >
> > > # First two ports on Intel 4x1G NIC
> > > #pci=['03:00.0','03:00.1']
> > >
> > > # Last two ports on Intel 4x1G NIC
> > > #pci=['04:00.0', '04:00.1']
> > >
> > > # All ports on Intel 4x1G NIC
> > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > >
> > > # Broadcom 2x1G NIC
> > > #pci=['05:00.0', '05:00.1']
> > > ################### PCI PASSTHROUGH ###################
> > >
> > > # HVM Disks
> > > # Hard disk only
> > > # Boot from HDD first ('c')
> > > boot="c"
> > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > >
> > > # Hard disk with ISO
> > > # Boot from ISO first ('d')
> > > #boot="d"
> > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > >
> > > # ACPI Enable
> > > acpi=1
> > > # HVM Event Modes
> > > # Serial Console Configuration (Xen Console)
> > > sdl=0
> > > serial='pty'
> > >
> > > # VNC Configuration
> > > # Only reachable from localhost
> > > vnc=1
> > > vnclisten="0.0.0.0"
> > > vncpasswd=""
> > >
> > > ###########################################################
> > > Copied from xen-users list
> > > ###########################################################
> > >
> > > It appears that it cannot obtain the RAM mapping for this PCI device.
> > >
> > >
> > > I rebooted the host and ran a script that assigns the PCI devices to
> > > pciback (a sketch of the typical bind sequence follows the listing
> > > below).  The output looks like:
> > > root@fiat:~# ./dev_mgmt.sh
> > > Loading Kernel Module 'xen-pciback'
> > > Calling function pciback_dev for:
> > > PCI DEVICE 0000:03:00.0
> > > Unbinding 0000:03:00.0 from igb
> > > Binding 0000:03:00.0 to pciback
> > >
> > > PCI DEVICE 0000:03:00.1
> > > Unbinding 0000:03:00.1 from igb
> > > Binding 0000:03:00.1 to pciback
> > >
> > > PCI DEVICE 0000:04:00.0
> > > Unbinding 0000:04:00.0 from igb
> > > Binding 0000:04:00.0 to pciback
> > >
> > > PCI DEVICE 0000:04:00.1
> > > Unbinding 0000:04:00.1 from igb
> > > Binding 0000:04:00.1 to pciback
> > >
> > > PCI DEVICE 0000:05:00.0
> > > Unbinding 0000:05:00.0 from bnx2
> > > Binding 0000:05:00.0 to pciback
> > >
> > > PCI DEVICE 0000:05:00.1
> > > Unbinding 0000:05:00.1 from bnx2
> > > Binding 0000:05:00.1 to pciback
> > >
> > > Listing PCI Devices Available to Xen
> > > 0000:03:00.0
> > > 0000:03:00.1
> > > 0000:04:00.0
> > > 0000:04:00.1
> > > 0000:05:00.0
> > > 0000:05:00.1
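
A minimal sketch of the bind sequence a script like dev_mgmt.sh typically performs, using the standard xen-pciback sysfs interface (the device addresses are the ones from the listing above):

#!/bin/sh
# Hand the Intel ET ports from igb to pciback so xl can assign them to a guest
modprobe xen-pciback
for dev in 0000:03:00.0 0000:03:00.1 0000:04:00.0 0000:04:00.1; do
    # Detach from whatever driver is currently bound (igb here), if any
    [ -e /sys/bus/pci/devices/$dev/driver ] && \
        echo $dev > /sys/bus/pci/devices/$dev/driver/unbind
    # Register the slot with pciback and bind it
    echo $dev > /sys/bus/pci/drivers/pciback/new_slot
    echo $dev > /sys/bus/pci/drivers/pciback/bind
done
xl pci-assignable-list    # should now list the devices, as above

xl pci-assignable-add <BDF> should achieve the same unbind/rebind in a single step.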
> > >
> > > ###########################################################
> > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > WARNING: ignoring device_model directive.
> > > WARNING: Use "device_model_override" instead if you really want a
> > > non-default device_model
> > > libxl: debug: libxl_create.c:1230:do_domain_create: ao 0x210c360: create:
> > > how=(nil) callback=(nil) poller=0x210c3c0
> > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> > > vdev=hda spec.backend=unknown
> > > libxl: debug: libxl_device.c:296:libxl__device_disk_set_backend: Disk
> > > vdev=hda, using backend phy
> > > libxl: debug: libxl_create.c:675:initiate_domain_create: running
> > bootloader
> > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run: not a PV
> > > domain, skipping bootloader
> > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > w=0x210c728: deregister unregistered
> > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate: New best NUMA
> > > placement candidate found: nr_nodes=1, nr_cpus=4, nr_vcpus=3,
> > > free_memkb=2980
> > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA placement
> > candidate
> > > with 1 nodes, 4 cpus and 2980 KB free selected
> > > xc: detail: elf_parse_binary: phdr: paddr=0x100000 memsz=0xa69a4
> > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > >   Loader:        0000000000100000->00000000001a69a4
> > >   Modules:       0000000000000000->0000000000000000
> > >   TOTAL:         0000000000000000->000000003f800000
> > >   ENTRY ADDRESS: 0000000000100608
> > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > >   4KB PAGES: 0x0000000000000200
> > >   2MB PAGES: 0x00000000000001fb
> > >   1GB PAGES: 0x0000000000000000
> > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 -> 0x7f022c81682d
> > > libxl: debug: libxl_device.c:257:libxl__device_disk_set_backend: Disk
> > > vdev=hda spec.backend=phy
> > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > register slotnum=3
> > > libxl: debug: libxl_create.c:1243:do_domain_create: ao 0x210c360:
> > > inprogress: poller=0x210c3c0, flags=i
> > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > epath=/local/domain/0/backend/vbd/2/768/state
> > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still waiting
> > state 1
> > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x2112f48
> > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0: event
> > > epath=/local/domain/0/backend/vbd/2/768/state
> > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > deregister slotnum=3
> > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > w=0x2112f48: deregister unregistered
> > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > > /etc/xen/scripts/block add
> > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm: Spawning
> > device-model
> > > /usr/bin/qemu-system-i386 with arguments:
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > /usr/bin/qemu-system-i386
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -xen-domid
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -chardev
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > chardev=libxl-cmd,mode=control
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   ubuntu-hvm-0
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   0.0.0.0:0,to=99
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   isa-fdc.driveA=
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -serial
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   cirrus
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -global
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   vga.vram_size_mb=8
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   order=c
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2,maxcpus=2
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -device
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -netdev
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -drive
> > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > >
> > file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > register
> > > slotnum=3
> > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > epath=/local/domain/0/device-model/2/state
> > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210c960
> > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > epath=/local/domain/0/device-model/2/state
> > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > w=0x210c960 wpath=/local/domain/0/device-model/2/state token=3/1:
> > > deregister slotnum=3
> > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > w=0x210c960: deregister unregistered
> > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > > /var/run/xen/qmp-libxl-2
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >     "execute": "qmp_capabilities",
> > >     "id": 1
> > > }
> > > '
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >     "execute": "query-chardev",
> > >     "id": 2
> > > }
> > > '
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >     "execute": "change",
> > >     "id": 3,
> > >     "arguments": {
> > >         "device": "vnc",
> > >         "target": "password",
> > >         "arg": ""
> > >     }
> > > }
> > > '
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >     "execute": "query-vnc",
> > >     "id": 4
> > > }
> > > '
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register: watch
> > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> > register
> > > slotnum=3
> > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > epath=/local/domain/0/backend/vif/2/0/state
> > > libxl: debug: libxl_event.c:647:devstate_watch_callback: backend
> > > /local/domain/0/backend/vif/2/0/state wanted state 2 still waiting state
> > 1
> > > libxl: debug: libxl_event.c:503:watchfd_callback: watch w=0x210e8a8
> > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > epath=/local/domain/0/backend/vif/2/0/state
> > > libxl: debug: libxl_event.c:643:devstate_watch_callback: backend
> > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > > libxl: debug: libxl_event.c:596:libxl__ev_xswatch_deregister: watch
> > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state token=3/2:
> > > deregister slotnum=3
> > > libxl: debug: libxl_event.c:608:libxl__ev_xswatch_deregister: watch
> > > w=0x210e8a8: deregister unregistered
> > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > > /etc/xen/scripts/vif-bridge online
> > > libxl: debug: libxl_device.c:959:device_hotplug: calling hotplug script:
> > > /etc/xen/scripts/vif-bridge add
> > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize: connected to
> > > /var/run/xen/qmp-libxl-2
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: qmp
> > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >     "execute": "qmp_capabilities",
> > >     "id": 1
> > > }
> > > '
> > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message type: return
> > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp command: '{
> > >     "execute": "device_add",
> > >     "id": 2,
> > >     "arguments": {
> > >         "driver": "xen-pci-passthrough",
> > >         "id": "pci-pt-03_00.0",
> > >         "hostaddr": "0000:03:00.0"
> > >     }
> > > }
> > > '
> > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error: Connection
> > reset
> > > by peer
> > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> > > Connection refused
> > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> > > Connection refused
> > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize: Connection error:
> > > Connection refused
> > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend: Creating pci
> > backend
> > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report: ao 0x210c360:
> > > progress report: ignored
> > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao 0x210c360:
> > > complete, rc=0
> > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao 0x210c360:
> > destroy
> > > Daemon running with PID 3214
> > > xc: debug: hypercall buffer: total allocations:793 total releases:793
> > > xc: debug: hypercall buffer: current allocations:0 maximum allocations:4
> > > xc: debug: hypercall buffer: cache current size:4
> > > xc: debug: hypercall buffer: cache hits:785 misses:4 toobig:4
> > >
> > > ###########################################################
> > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > char device redirected to /dev/pts/5 (label serial0)
> > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > CPU #0:
> > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > > ES =0000 00000000 0000ffff 00009300
> > > CS =f000 ffff0000 0000ffff 00009b00
> > > SS =0000 00000000 0000ffff 00009300
> > > DS =0000 00000000 0000ffff 00009300
> > > FS =0000 00000000 0000ffff 00009300
> > > GS =0000 00000000 0000ffff 00009300
> > > LDT=0000 00000000 0000ffff 00008200
> > > TR =0000 00000000 0000ffff 00008b00
> > > GDT=     00000000 0000ffff
> > > IDT=     00000000 0000ffff
> > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > DR6=ffff0ff0 DR7=00000400
> > > EFER=0000000000000000
> > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > XMM00=00000000000000000000000000000000
> > > XMM01=00000000000000000000000000000000
> > > XMM02=00000000000000000000000000000000
> > > XMM03=00000000000000000000000000000000
> > > XMM04=00000000000000000000000000000000
> > > XMM05=00000000000000000000000000000000
> > > XMM06=00000000000000000000000000000000
> > > XMM07=00000000000000000000000000000000
> > > CPU #1:
> > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=1
> > > ES =0000 00000000 0000ffff 00009300
> > > CS =f000 ffff0000 0000ffff 00009b00
> > > SS =0000 00000000 0000ffff 00009300
> > > DS =0000 00000000 0000ffff 00009300
> > > FS =0000 00000000 0000ffff 00009300
> > > GS =0000 00000000 0000ffff 00009300
> > > LDT=0000 00000000 0000ffff 00008200
> > > TR =0000 00000000 0000ffff 00008b00
> > > GDT=     00000000 0000ffff
> > > IDT=     00000000 0000ffff
> > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > DR6=ffff0ff0 DR7=00000400
> > > EFER=0000000000000000
> > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > XMM00=00000000000000000000000000000000
> > > XMM01=00000000000000000000000000000000
> > > XMM02=00000000000000000000000000000000
> > > XMM03=00000000000000000000000000000000
> > > XMM04=00000000000000000000000000000000
> > > XMM05=00000000000000000000000000000000
> > > XMM06=00000000000000000000000000000000
> > > XMM07=00000000000000000000000000000000
> > >
> > > ###########################################################
> > > /etc/default/grub
> > > GRUB_DEFAULT="Xen 4.3-amd64"
> > > GRUB_HIDDEN_TIMEOUT=0
> > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > > GRUB_TIMEOUT=10
> > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
> > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > > GRUB_CMDLINE_LINUX=""
> > > # biosdevname=0
> > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
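
These options only take effect once grub.cfg is regenerated and the host is rebooted into the Xen entry. A quick sanity check afterwards (commands assume a Debian-style install):

sudo update-grub                  # regenerate /boot/grub/grub.cfg
sudo reboot
# after booting the "Xen 4.3-amd64" entry:
xl info | grep xen_commandline    # should show dom0_mem=1024M dom0_max_vcpus=1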
> >
> > > _______________________________________________
> > > Xen-devel mailing list
> > > Xen-devel@xxxxxxxxxxxxx
> > > http://lists.xen.org/xen-devel
> >
> >

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

