
Re: [Xen-devel] pv_ops dom0 USB fixed



On Thu, Dec 11, 2008 at 05:10:24PM +0000, Andrew Lyon wrote:
> On Wed, Dec 10, 2008 at 9:03 PM, Andrew Lyon <andrew.lyon@xxxxxxxxx> wrote:
> > On Wed, Dec 10, 2008 at 8:48 PM, Jeremy Fitzhardinge <jeremy@xxxxxxxx> 
> > wrote:
> >> Pasi Kärkkäinen wrote:
> >>>
> >>> On Wed, Dec 10, 2008 at 12:05:32PM -0800, Jeremy Fitzhardinge wrote:
> >>>
> >>>>
> >>>> Ian Campbell noticed a missing TLB flush that was causing the USB
> >>>> crashes/failures when booting the pvops dom0 kernel.  With that fixed,
> >>>> the kernel boots reliably with USB enabled, and USB seems to work.
> >>>>
> >>>>
> >>>
> >>> Nice!
> >>>
> >>>
> >>>>
> >>>> It's quite possible this will also improve matters with ATA/SATA
> >>>> controllers, though I haven't tested it so far.
> >>>>
> >>>> Anyway, it's a significant fix and it's worth trying the current pvops
> >>>> patch queue again.  Please tell me what you find.
> >>>>
> >>>>
> >
> > Excellent news! I've tried the pv_ops dom0 kernel several times
> > recently and had failures with the USB and SATA drivers, so fingers
> > crossed it will work now.
> >
> > I will test tomorrow and report my results ;-)
> >
> > Andy
> >
> >>>
> >>> Hmm.. against what kernel/tree are these patches?
> >>>
> >>
> >> See the wiki ;)
> >>
> >>
> >> Pull the kernel.org/hg/linux-2.6 tree, "hg update $(cat KERNEL_VERSION)",
> >> then "hg qpush -a"
> >>
> >>   J
> >>
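
For reference, the steps Jeremy describes amount to something like the
sketch below. The patch-queue URL and the location of the KERNEL_VERSION
file are assumptions based on the wiki of the time, not confirmed by this
thread:

    # clone the base kernel tree Jeremy refers to
    hg clone http://www.kernel.org/hg/linux-2.6
    cd linux-2.6
    # clone the pvops patch queue into the tree as a Mercurial Queue
    # (URL assumed -- check the wiki for the current location)
    hg clone http://xenbits.xensource.com/paravirt_ops/patches.hg .hg/patches
    # check out the base revision the queue expects, then apply every patch
    # (needs the mq extension: "mq =" under [extensions] in ~/.hgrc)
    hg update $(cat .hg/patches/KERNEL_VERSION)
    hg qpush -a
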
> >
> 
> I downloaded the source a few minutes ago and tried pv_ops dom0 on my
> test system, a Dell OptiPlex 755; it doesn't get very far at all into
> the boot process:
> 
> (XEN) *** LOADING DOMAIN 0 ***
> (XEN)  Xen  kernel: 64-bit, lsb, compat32
> (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x200000 -> 0x8a0418
> (XEN) PHYSICAL MEMORY ARRANGEMENT:
> (XEN)  Dom0 alloc.:   0000000120000000->0000000122000000 (951655 pages
> to be allocated)
> (XEN) VIRTUAL MEMORY ARRANGEMENT:
> (XEN)  Loaded kernel: ffffffff80200000->ffffffff808a0418
> (XEN)  Init. ramdisk: ffffffff808a1000->ffffffff808a1000
> (XEN)  Phys-Mach map: ffffffff808a1000->ffffffff80ff3b38
> (XEN)  Start info:    ffffffff80ff4000->ffffffff80ff44a4
> (XEN)  Page tables:   ffffffff80ff5000->ffffffff81002000
> (XEN)  Boot stack:    ffffffff81002000->ffffffff81003000
> (XEN)  TOTAL:         ffffffff80000000->ffffffff81400000
> (XEN)  ENTRY ADDRESS: ffffffff80765200
> (XEN) Dom0 has maximum 2 VCPUs
> (XEN) Scrubbing Free RAM: .done.
> (XEN) Xen trace buffers: disabled
> (XEN) Std. Loglevel: Errors and warnings
> (XEN) Guest Loglevel: Nothing (Rate-limited: Errors and warnings)
> (XEN) Xen is relinquishing VGA console.
> (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch
> input to Xen)
> (XEN) Freed 108kB init memory.
> mapping kernel into physical memory
> Xen: setup ISA identity maps
> about to get started...
> (XEN) ioapic_guest_write: apic=0, pin=2, old_irq=0, new_irq=-1
> (XEN) ioapic_guest_write: old_entry=000009f0, new_entry=00010900
> (XEN) ioapic_guest_write: Attempt to remove IO-APIC pin of in-use IRQ!
> (XEN) ioapic_guest_write: apic=0, pin=4, old_irq=4, new_irq=-1
> (XEN) ioapic_guest_write: old_entry=000009f1, new_entry=00010900
> (XEN) ioapic_guest_write: Attempt to remove IO-APIC pin of in-use IRQ!
> (XEN) ioapic_guest_write: apic=0, pin=4, old_irq=4, new_irq=4
> (XEN) ioapic_guest_write: old_entry=000009f1, new_entry=000189f1
> (XEN) ioapic_guest_write: Attempt to modify IO-APIC pin for in-use IRQ!
> 
> And then it hangs.
> 
> I tried adding pci=nomsi to the kernel arguments; it didn't seem to
> make much difference.
> 
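
For reference, under Xen the pci=nomsi option belongs on the dom0 kernel
("module") line, not on the hypervisor ("kernel") line. A minimal GRUB
legacy entry might look like the sketch below; the kernel version, initrd
name and root device are hypothetical:

    title  Xen with pv_ops dom0
    root   (hd0,0)
    kernel /boot/xen.gz console=com1 com1=115200,8n1
    module /boot/vmlinuz-2.6.27-pvops root=/dev/sda1 ro console=hvc0 pci=nomsi
    module /boot/initrd-2.6.27-pvops.img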

Did you try specifying both pci=nomsi and nosmp?
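
With both options, the dom0 module line from the sketch above would read
(same hypothetical paths):

    module /boot/vmlinuz-2.6.27-pvops root=/dev/sda1 ro console=hvc0 pci=nomsi nosmp

nosmp limits dom0 to a single CPU, which can help isolate interrupt-routing
problems like the ioapic_guest_write errors quoted above.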

-- Pasi

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

