Re: [Xen-devel] What is the current state of Dom0 kernel support?
On Wed, 8 Jul 2009, Jeremy Fitzhardinge wrote:
> rebase/master is what I'm currently working on. It's work-in-progress,
> but it works for me at the moment. I'd appreciate any test results you
> have. (I don't yet have a fix in there for your PAE issue however.)

x86_64 works for me. i686 does the following then stops. This is from
rebase/master from 30th June.

	Michael Young

[Xen 3.3.1.fc11 ASCII art boot banner]
(XEN) Xen version 3.3.1-11.fc11 (mockbuild@(none)) (gcc version 4.4.0 20090307 (Red Hat 4.4.0-0.23) (GCC) ) Tue Mar 10 08:26:32 EDT 2009
(XEN) Latest ChangeSet: unavailable
(XEN) Command line: com1=38400 console=com1
(XEN) Video information:
(XEN)  VGA is text mode 80x25, font 8x16
(XEN)  VBE/DDC methods: V2; EDID transfer time: 1 seconds
(XEN) Disc information:
(XEN)  Found 1 MBR signatures
(XEN)  Found 1 EDD information structures
(XEN) Xen-e820 RAM map:
(XEN)  0000000000000000 - 000000000009fc00 (usable)
(XEN)  000000000009fc00 - 00000000000a0000 (reserved)
(XEN)  00000000000e0000 - 0000000000100000 (reserved)
(XEN)  0000000000100000 - 000000001f740000 (usable)
(XEN)  000000001f740000 - 000000001f750000 (ACPI data)
(XEN)  000000001f750000 - 000000001f800000 (ACPI NVS)
(XEN) System RAM: 502MB (514940kB)
(XEN) ACPI: RSDP 000F7970, 0014 (r0 ACPIAM)
(XEN) ACPI: RSDT 1F740000, 0030 (r1 INTEL D845GRG 20020909 MSFT 97)
(XEN) ACPI: FACP 1F740200, 0081 (r2 INTEL D845GRG 20020909 MSFT 97)
(XEN) ACPI: DSDT 1F740400, 3F21 (r1 INTEL D845GRG 10A MSFT 100000D)
(XEN) ACPI: FACS 1F750000, 0040
(XEN) ACPI: APIC 1F740300, 0068 (r1 INTEL D845GRG 20020909 MSFT 97)
(XEN) ACPI: ASF! 1F744330, 0084 (r16 AMIASF I845GASF 1 MSFT 100000D)
(XEN) Xen heap: 9MB (9732kB)
(XEN) Domain heap initialised
(XEN) Processor #0 15:2 APIC version 20
(XEN) IOAPIC[0]: apic_id 1, version 32, address 0xfec00000, GSI 0-23
(XEN) Enabling APIC mode: Flat. Using 1 I/O APICs
(XEN) Using scheduler: SMP Credit Scheduler (credit)
(XEN) Detected 2400.128 MHz processor.
(XEN) CPU0: Intel(R) Pentium(R) 4 CPU 2.40GHz stepping 07
(XEN) Total of 1 processors activated.
(XEN) ENABLING IO-APIC IRQs
(XEN)  -> Using new ACK method
(XEN) Platform timer is 3.579MHz ACPI PM Timer
(XEN) Brought up 1 CPUs
(XEN) I/O virtualisation disabled
(XEN) *** LOADING DOMAIN 0 ***
(XEN) Xen kernel: 32-bit, PAE, lsb
(XEN) Dom0 kernel: 32-bit, PAE, lsb, paddr 0x400000 -> 0x134f000
(XEN) PHYSICAL MEMORY ARRANGEMENT:
(XEN)  Dom0 alloc.: 000000001a000000->000000001c000000 (108521 pages to be allocated)
(XEN) VIRTUAL MEMORY ARRANGEMENT:
(XEN)  Loaded kernel: c0400000->c134f000
(XEN)  Init. ramdisk: c134f000->c1ab0000
(XEN)  Phys-Mach map: c1ab0000->c1b21fa4
(XEN)  Start info:    c1b22000->c1b22474
(XEN)  Page tables:   c1b23000->c1b36000
(XEN)  Boot stack:    c1b36000->c1b37000
(XEN)  TOTAL:         c0000000->c1c00000
(XEN)  ENTRY ADDRESS: c09e6000
(XEN) Dom0 has maximum 1 VCPUs
(XEN) Scrubbing Free RAM: done.
(XEN) Xen trace buffers: disabled
(XEN) Std. Loglevel: Errors and warnings
(XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
(XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
(XEN) Freed 108kB init memory.
mapping kernel into physical memory
Xen: setup ISA identity maps
about to get started...
(XEN) ioapic_guest_write: apic=0, pin=2, old_irq=0, new_irq=-1
(XEN) ioapic_guest_write: old_entry=000009f0, new_entry=00010900
(XEN) ioapic_guest_write: Attempt to remove IO-APIC pin of in-use IRQ!
(XEN) ioapic_guest_write: apic=0, pin=4, old_irq=4, new_irq=-1
(XEN) ioapic_guest_write: old_entry=000009f1, new_entry=00010900
(XEN) ioapic_guest_write: Attempt to remove IO-APIC pin of in-use IRQ!

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel