
Re: [win-pv-devel] How to diagnose windows domU hang on boot?



> -----Original Message-----
> From: firemeteor.guo@xxxxxxxxx [mailto:firemeteor.guo@xxxxxxxxx] On
> Behalf Of G.R.
> Sent: 06 February 2017 10:15
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: win-pv-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [win-pv-devel] How to diagnose windows domU hang on boot?
> 
> On Mon, Feb 6, 2017 at 4:47 PM, Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> wrote:
> > GNTTAB: MAP XENMAPSPACE_grant_table[31] @ 00000000.f0020000
> > XENBUS|GnttabExpand: added references [00003e00 - 00003fff]
> > XENBUS|RangeSetPop: fail1 (c000009a)
> > XENBUS|GnttabExpand: fail1 (c000009a)
> > XENBUS|GnttabEntryCtor: fail1 (c000009a)
> > XENBUS|CacheCreateObject: fail2
> > XENBUS|CacheCreateObject: fail1 (c000009a)
> >
> > It looks to me like you must have multipage rings for your storage
> > because XENVBD has grabbed the entire grant table before XENVIF gets a
> > look-in.
> >
> > ...which means that XENVIF cannot even allocate grant references for the
> > 4 shared pages it needs to hook up the receive side queues.
> >
> > That was from qemu-ruibox_new.log. As for the other log, I can't see
> > anything wrong other than it seems to have stopped once it has attached
> > to storage... which probably suggests your backend is not working
> > properly.
> By back-end do you mean the provider of the storage (in my case the
> FreeBSD 10-based FreeNAS domU)?

No, I mean your driver domain, which may or may not contain the actual storage.

> I think FreeBSD 10 is capable of serving as dom0 and providing various
> back-ends.
> The version I'm using is Xen-enabled. I'll double-check to see
> if it's configured for dom0 / backend support.
> Just a sanity check -- what should I expect to see if the backend
> support in the driver-domain domU is not enabled at all?

No idea. All I'm suggesting is keeping your storage datapath simple. Driver 
domains are not in common use. Certainly I've never used one so I'm not au fait 
with debugging that kind of system.
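One coarse check that works from dom0 either way is to see whether the backend 
device nodes ever appear and reach the connected state in xenstore. A rough 
sketch, assuming the xenstore client tools are present in dom0 and with 
<backend-domid> standing in for your driver domain's domid:

  # List the vbd backend nodes the toolstack wrote for the guest
  xenstore-ls /local/domain/<backend-domid>/backend/vbd

  # A 'state' node of 4 (XenbusStateConnected) means the frontend/backend
  # handshake completed; if the path doesn't exist at all, the toolstack
  # never set up a backend in that domain.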

> 
> >
> > So, I suggest you stick with dom0 backends but limit your storage to a
> > single page ring. I can't remember the exact blkback module param you
> > need to set to do that, but it shouldn't be hard to find.
> >
> I'm not sure I understand what you mean by 'multipage rings'. It must
> refer to something internal which is not mentioned in the doc:
> http://xenbits.xen.org/docs/4.8-testing/misc/xl-disk-configuration.txt

No. I'm referring to the blkback feature. See https://lkml.org/lkml/2015/6/3/25

I suggest you set blkback's max_ring_page_order parameter to 0, because I think 
it is currently defaulting to 4... which is stupidly large for most use-cases.
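For reference, the log above is consistent with exhausting the default 32-frame 
grant table: with 8-byte v1 grant entries a frame holds 512 references, so 32 
frames give 16384 (0x4000) in total, and [00003e00 - 00003fff] is exactly the 
last block handed out. A minimal sketch of pinning blkback back to single-page 
rings, assuming blkback is built as the xen-blkback module (the paths and 
syntax will differ if it's built into your dom0 kernel):

  # Check the current default, if the module is loaded
  cat /sys/module/xen_blkback/parameters/max_ring_page_order

  # Make single-page rings the default on the next module load
  echo "options xen-blkback max_ring_page_order=0" > /etc/modprobe.d/xen-blkback.conf

  # If blkback is built in, the equivalent kernel command line option is:
  #   xen-blkback.max_ring_page_order=0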

> 
> What I'm doing is NOT fancy at all. It's just plain file-based raw
> storage. The config works fine with the GPL PV drivers, BTW.
> 
> The disk config: (the first line is for the experimental domU based config)
> #disk = ['backend=nas,/mnt/tank0/DiskImgs/Windows/ruibox/ruibox.img,raw,xvda,w']
> #disk = ['file:/mnt/vmfs/Windows/ruibox/ruibox.img,xvda,w']
> 
> Anything wrong with that?

No. I'd stick with the second config and just tune blkback as I suggest.
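If you want to confirm the grant table is no longer being exhausted after 
tuning, the hypervisor can dump per-domain grant usage. A quick sketch, 
assuming the xl toolstack and a Xen build that wires the 'g' debug key to the 
grant table dump:

  xl debug-keys g   # ask Xen to dump grant table usage
  xl dmesg | tail   # read the dump from the hypervisor console log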

  Paul
_______________________________________________
win-pv-devel mailing list
win-pv-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/cgi-bin/mailman/listinfo/win-pv-devel

 

