Re: [win-pv-devel] xenvbd (8.x) - blkback/tapdisk3 problems
> -----Original Message-----
> From: Martin Cerveny [mailto:martin@xxxxxxxxx]
> Sent: 30 October 2016 12:18
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: win-pv-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: RE: [win-pv-devel] xenvbd (8.x) - blkback/tapdisk3 problems
>
> Hello.
>
> Thanks for the response.
>
> On Fri, 28 Oct 2016, Paul Durrant wrote:
> >> -----Original Message-----
> >> I have problems with xenvbd (8.x). There was no problem with the older
> >> pv-drivers xenvbd (7.2x).
> >> Questions @ bottom.
> >>
> >> I use a remote raw disk as the source (multipath+iscsi+iser+ib).
> >> Two configs:
> >>
> >> --------------------------
> >>
> >> 1) use direct blkback (format=raw, vdev=hda, access=rw,
> >> target=/dev/mapper/3600144f07a0542580000568ba94a0001)
> >>
> >> Performance is good, but it is __unusable__ for real work.
> >>
> >> Every few seconds/minutes (randomly, depending on disk load) Windows
> >> hangs on I/O operations. I usually saw this more often during write
> >> operations.
> >>
> >> Sometimes (1:10) I saw "PdoReset" in "DebugView" (DomU):
> >>
> >> 00003034 10:12:32 XENVBD|__PdoReset:Target[0] ====>
> >> 00003054 10:12:53 XENVBD|__PdoReset:Target[0] <====
> >>
> >> There is also a restart log in Dom0, but no errors on disks/iscsi:
> >>
> >> [ 3919.034421] xen-blkback:backend/vbd/3/768: prepare for reconnect
> >> [ 3919.039869] xen-blkback:ring-ref 32, event-channel 40, protocol 1 (x86_64-abi)
> >>
> >
> > Yes, XENVBD is being asked to reset because Windows thinks the storage
> > is stalled and it looks like it was probably right. Suggests a loss of
> > event notification somewhere.
>
> Ok, I suppose the Dom0 logs are a result of the DomU reset, no problem.
>
> >> Environments:
> >> - Windows 7 x64
> >> - tested the signed winpv drivers 8.1 and primarily the development drivers 8.2
> >> - xen 4.5.3, 4.6 and primarily 4.7.0
> >> - "XenServer" kernels - kernel-3.10.41-353.380450 (and others from XS6.5)
> >>   and kernel-3.10.96-495.383045.x86_64 (and others from XS7)
> >> - blktap3 - blktap-3.0.0.xs1001-xs6.5.0 and blktap-3.2.0.xs1087-xs7.0.0.x86_64
> >>
> >> ---------------------
> >>
> >> Questions:
> >>
> >> What is buggy in the "direct blkback" chain ?
>
> No idea. Possibly blkback, possibly the underlying storage. Your kernels are
> old and blkback has undergone many changes in more recent kernels.
>
> (I suppose that the XS kernel is super-tuned for Dom0 and there won't be
> such a problem there.)
>
> Now updated Dom0:
> - fedora 24 + updates
> - kernel - 4.7.9-200.fc24.x86_64
> - xen 4.7.0 + some backports from XS
>
> The problem is the same.
> When were the "changes" to blkback applied ?
>

Clearly a 4.7 kernel is about as up-to-date as you want, and I am not aware of
any changes in blkback since then, so it does tend to suggest the problem is
elsewhere. It could well be that XENVBD has some bad interaction with
blkback... In my test environments I normally use QEMU qdisk as a backend.
Do you see the same issues if you, say, point blkback at a loopback file mount
or even an nbd device?

> >> Was it tested ?
>
> By XenServer? No. XenServer makes no use of blkback.
>
> :-((( (I see a few patches to blkback
> (https://github.com/xenserver/linux-3.x.pg/tree/master/master))
>

Well, I exaggerated... We need it for corner cases (e.g. early HVM guest boot
where QEMU is emulating h/w), but none of the 'official' XenServer SRs will
use it once the PV drivers are up and running in the guest; the frontends talk
straight to tapdisk3 in userspace.

  Paul
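As a concrete starting point for the loopback test suggested above, here is a
minimal sketch; the image path, size and loop device are placeholders
(assumptions, not taken from this thread), and only the format/vdev/access
keys mirror the config quoted earlier:

  # dom0: create a sparse raw image and attach it to a loop device
  # (placeholder path and size)
  truncate -s 10G /var/tmp/xenvbd-test.raw
  losetup /dev/loop0 /var/tmp/xenvbd-test.raw

  # xl disk spec forcing blkback (backendtype=phy) against the loop device
  # instead of the dm-multipath/iSCSI target
  disk = [ 'format=raw, vdev=hda, access=rw, backendtype=phy, target=/dev/loop0' ]

If the PdoReset hangs disappear with this configuration, the fault is more
likely in the multipath/iscsi+iser path (or its interaction with blkback) than
in XENVBD itself; if they persist, the XENVBD/blkback interaction remains the
prime suspect.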
> Thanks for answers or hints, Martin Cerveny
>
> > Paul
>
> >> Thanks for answers, Martin Cerveny

_______________________________________________
win-pv-devel mailing list
win-pv-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/cgi-bin/mailman/listinfo/win-pv-devel