
Re: [win-pv-devel] Porting libvchan to use the Windows PV Drivers



> -----Original Message-----
> From: win-pv-devel-bounces@xxxxxxxxxxxxxxxxxxxx [mailto:win-pv-devel-
> bounces@xxxxxxxxxxxxxxxxxxxx] On Behalf Of Rafal Wojdyla
> Sent: 12 March 2015 20:05
> To: Paul Durrant; win-pv-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Marek Marczykowski-Górecki
> Subject: Re: [win-pv-devel] Porting libvchan to use the Windows PV Drivers
> 
> 
> On 12.03.2015 18:09, Paul Durrant wrote:
> >> -----Original Message----- From:
> >> win-pv-devel-bounces@xxxxxxxxxxxxxxxxxxxx [mailto:win-pv-devel-
> >> bounces@xxxxxxxxxxxxxxxxxxxx] On Behalf Of Rafal Wojdyla Sent: 12
> >> March 2015 17:06 To: Paul Durrant;
> >> win-pv-devel@xxxxxxxxxxxxxxxxxxxx Cc: Marek Marczykowski-Górecki
> >> Subject: Re: [win-pv-devel] Porting libvchan to use the Windows
> >> PV Drivers
> >>
> > On 2015-03-12 17:45, Paul Durrant wrote:
> >>>>> -----Original Message----- From: Rafał Wojdyła
> >>>>> [mailto:omeg@xxxxxxxxxxxxxxxxxxxxxx] Sent: 12 March 2015
> >>>>> 16:17 To: Paul Durrant; win-pv-devel@xxxxxxxxxxxxxxxxxxxx
> >>>>> Subject: Re: [win-pv-devel] Porting libvchan to use the
> >>>>> Windows PV Drivers
> >>>>>
> >>>> On 2015-03-11 18:46, Paul Durrant wrote:
> >>>>>>>> -----Original Message----- From:
> >>>>>>>> win-pv-devel-bounces@xxxxxxxxxxxxxxxxxxxx
> >>>>>>>> [mailto:win-pv-devel- bounces@xxxxxxxxxxxxxxxxxxxx]
> >>>>>>>> On Behalf Of Rafal Wojdyla Sent: 10 March 2015 20:16
> >>>>>>>> To: win-pv-devel@xxxxxxxxxxxxxxxxxxxx Subject: Re:
> >>>>>>>> [win-pv-devel] Porting libvchan to use the Windows
> >>>>>>>> PV Drivers
> >>>>>>>>
> >>>>>>> Hi,
> >>>>>>>
> >>>>>>>
> >>>>>>>> Hi,
> >>>>>>>
> >>>>>>> I'm unable to properly reply to the thread since I
> >>>>>>> just subscribed to this list but I figured it's worth
> >>>>>>> chiming in (last message is here:
> >>>>>>> http://lists.xenproject.org/archives/html/win-pv-devel/2015-01/msg00060.html)
> >>>>>>>
> >>>>>>>> Yes, I understand; I just saw your subscription
> >>>>>>>> message :-)
> >>>>>>>
> >>>>>>> First, some background about me. I'm currently the main
> >>>>>>> and pretty much the only developer/maintainer of guest
> >>>>>>> tools for Windows for Qubes OS
> >>>>>>> (https://wiki.qubes-os.org/). Some of you may have
> >>>>>>> heard of Qubes -- in short, it's an attempt at creating
> >>>>>>> a secure OS based on lightweight AppVMs, currently
> >>>>>>> using Linux/Xen as the base. It supports Windows HVMs and
> >>>>>>> our guest tools provide integration with dom0/other
> >>>>>>> domUs (services like data transfer, remote execution,
> >>>>>>> seamless GUI experience etc).
> >>>>>>>
> >>>>>>>
> >>>>>>>> Cool.
> >>>>>>>
> >>>>>>> We're in the process of finalizing the next major
> >>>>>>> release (r3) of Qubes; it will use Xen 4.4 instead of
> >>>>>>> r2's Xen 4.1. As for our Windows tools, they are
> >>>>>>> (currently) using PV drivers based on James Harper's
> >>>>>>> code.
> >>>>>>>
> >>>>>>> Our inter-VM communication protocol uses vchan (in
> >>>>>>> fact, vchan originates from our patch accepted into
> >>>>>>> Xen's source a few years ago). In Qubes r2 we have a
> >>>>>>> Windows libvchan implementation, but as stated above,
> >>>>>>> it uses old PV drivers interfaces. You can find it
> >>>>>>> here: https://github.com/QubesOS/qubes-core-vchan-xen
> >>>>>>>
> >>>>>>> That implementation has one big flaw: client-side
> >>>>>>> vchan functions are not implemented. It didn't matter
> >>>>>>> for Qubes r2, where all vchan communication passes
> >>>>>>> through dom0 anyway. In Qubes r3, however, we need that
> >>>>>>> working because the redesigned inter-VM communication
> >>>>>>> protocol allows direct VM-VM communication after
> >>>>>>> dom0 arbitration.
> >>>>>>>
> >>>>>>> Unfortunately Harper's drivers don't seem to implement
> >>>>>>> the needed kernel interfaces for that either.
> >>>>>>>
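
For context, the client/server split in the vchan API looks roughly like
this on the Linux side. This is only a sketch based on Xen's tools/libvchan,
and the parameter names are approximate:

/* The server allocates the ring pages, grants them to the peer and
 * advertises the details in xenstore; the client maps (or copies from)
 * those grants - the piece that needs grant map/copy support from the
 * Windows PV drivers. */
#include <stddef.h>
#include <libxenvchan.h>

/* Runs in the listening domain. */
struct libxenvchan *vchan_listen(int client_domid, const char *xs_path)
{
    /* 4096-byte read and write rings; NULL logger. */
    return libxenvchan_server_init(NULL, client_domid, xs_path, 4096, 4096);
}

/* Runs in the connecting domain - the client-side path that is missing
 * from the old Windows implementation. */
struct libxenvchan *vchan_connect(int server_domid, const char *xs_path)
{
    return libxenvchan_client_init(NULL, server_domid, xs_path);
}

/* Either end then uses the same I/O calls, e.g.: */
int vchan_send(struct libxenvchan *ctrl, void *data, size_t size)
{
    return libxenvchan_write(ctrl, data, size);   /* blocking by default */
}
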
> >>>>>>>> I assume you mean grant mapping? Or maybe just grant
> >>>>>>>> copy, since that would be safer?
> >>>>>>>
> >>>>>>> I didn't need to look into PV drivers sources before,
> >>>>>>> but it seems I will need to do that now :) I found the
> >>>>>>> new PV drivers and this mailing list, found the thread
> >>>>>>> about vchan implementation... and that's pretty much it
> >>>>>>> for now.
> >>>>>>>
> >>>>>>> As I said, I don't have much experience in Xen APIs
> >>>>>>> (didn't need to tinker with them directly before). I
> >>>>>>> do, however, have extensive WinAPI knowledge and
> >>>>>>> a moderate amount of Windows driver development
> >>>>>>> experience (part of our guest tools is a custom display
> >>>>>>> driver that allows no-copy video memory sharing with
> >>>>>>> dom0). I managed to build the new drivers and will test
> >>>>>>> them on our dev Qubes build soon.
> >>>>>>>
> >>>>>>> So, to summarize, I'm very interested in developing a
> >>>>>>> Windows vchan implementation on top of the new PV
> >>>>>>> drivers. I'll be reading through the driver sources for
> >>>>>>> a bit still to familiarize myself with the environment.
> >>>>>>> If anyone managed to get something working, or just has
> >>>>>>> ideas, let me know.
> >>>>>>>
> >>>>>>>
> >>>>>>>> If you want to look at adding the necessary code to
> >>>>>>>> the XENBUS_GNTTAB interface to do grant map/copy then
> >>>>>>>> I don't imagine it will be too hard. Adding support
> >>>>>>>> for copy would be easiest but it would also be
> >>>>>>>> possible to grant map pages into the platform PCI
> >>>>>>>> device's BAR (which is where the shared info page and
> >>>>>>>> the grant table itself live).
> >>>>>>>
> >>>>>>>> Let me know if you have any specific questions or need
> >>>>>>>> some help getting the drivers going in your
> >>>>>>>> environment.
> >>>>>>>
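
For reference, grant copy at the hypercall level is GNTTABOP_copy. Below is
a minimal sketch using the structures from Xen's public grant_table.h; the
GrantTableOp() wrapper and the way this would surface through XENBUS_GNTTAB
are illustrative assumptions, not existing driver code:

#include <ntddk.h>
#include <xen.h>        /* Xen public headers as bundled with the drivers */

/* Copy Length bytes out of a page granted to us by RemoteDomain into a
 * local frame, without ever mapping the foreign page. */
static NTSTATUS
GnttabCopyFromForeign(
    IN  USHORT      RemoteDomain,   /* domid that owns the source grant */
    IN  ULONG       SourceRef,      /* grant reference in that domain */
    IN  PFN_NUMBER  LocalPfn,       /* local destination frame */
    IN  USHORT      Offset,
    IN  USHORT      Length
    )
{
    struct gnttab_copy  op;

    RtlZeroMemory(&op, sizeof (op));

    op.source.u.ref = SourceRef;
    op.source.domid = RemoteDomain;
    op.source.offset = Offset;

    op.dest.u.gmfn = LocalPfn;      /* destination addressed by local GMFN */
    op.dest.domid = DOMID_SELF;
    op.dest.offset = Offset;

    op.len = Length;
    op.flags = GNTCOPY_source_gref; /* source is a grant ref, dest is not */

    /* GrantTableOp() stands in for however the drivers end up issuing
     * HYPERVISOR_grant_table_op(GNTTABOP_copy, &op, 1). */
    if (GrantTableOp(GNTTABOP_copy, &op, 1) != 0 || op.status != GNTST_okay)
        return STATUS_UNSUCCESSFUL;

    return STATUS_SUCCESS;
}
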
> >>>> I've tested the drivers on a Win7 pro x64 HVM on Qubes r2 (r3
> >>>> is still a bit unstable). Xenbus and xeniface both install
> >>>> fine. Xenvbd installs OK but the OS BSODs on reboot with code
> >>>> 7B (inaccessible boot device). I'll try to pinpoint the exact
> >>>> failure spot once I set up the pvdrivers sources inside my
> >>>> development VM.
> >>>>
> >>>>
> >>>>> 0x7B can occur in many circumstances. The drivers do log
> >>>>> quite a bit of info, particularly in checked builds, so
> >>>>> there'll probably be something there to indicate the exact
> >>>>> nature of the failure. The main informational logging
> >>>>> (which is the same for free or checked builds) is written
> >>>>> to the qemu logging port (0x12) and debug logging (checked
> >>>>> build only) goes to the Xen port (0xE9). If you watch
> >>>>> wherever you have those redirected then you may be able to
> >>>>> spot the problem. If you can't then post them to the list
> >>>>> and I'll take a look.
> >>>>
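
The logging itself is just byte-at-a-time writes to an I/O port from inside
the guest; purely as an illustration (not the actual XENBUS logging code) it
amounts to something like:

#include <ntddk.h>

#define QEMU_LOGGING_PORT   0x12    /* informational log, free and checked builds */
#define XEN_DEBUG_PORT      0xE9    /* debug log, checked builds only */

static VOID
DebugPortWriteString(
    IN  USHORT  Port,
    IN  PCSTR   Message
    )
{
    while (*Message != '\0')
        WRITE_PORT_UCHAR((PUCHAR)(ULONG_PTR)Port, (UCHAR)*Message++);
}

so the messages end up wherever you have those ports redirected on the host
side.
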
> >>>> Do the drivers have specific requirements for backend
> >>>> (Xen/Qemu version)? We're not really using Qemu in dom0, only
> >>>> in minimal stubdoms for HVMs, so that may be a problem.
> >>>>
> >>>>
> >>>>> That's not usually a problem. Do you have PV backends for
> >>>>> disk and net set up though? The fact that you got a 0x7B
> >>>>> after installing xenvbd may simply mean that your toolstack
> >>>>> has just not set up a PV backend.
> > We do have backends set up (xen-blkback for vbd). I'll check in
> > dom0 whether it crashes after the device gets attached or before.
> >
> >
> >> Ok. I must admit that I tend to use qdisk as a backend in most of
> >> my testing, but blkback should be fine. I'll sanity check it
> >> myself when I get time though.
> >
> Sometimes the VM BSODs with 0x7E. I managed to connect WinDbg to it
> and grab some logs (in the attachment). At a glance it seems like a
> lot of event channel failures...
> 
> XENVBD|PdoReset:ASSERTION FAILED: (((NTSTATUS)(Status)) >= 0)
> Assertion
> f:\qubes-builder\qubes-src\xen-pv\xenvbd\src\xenvbd\pdo.c(2297):
> (((NTSTATUS)(Status)) >= 0)
> 

Yes, that certainly sounds like it could be the cause. What version of Xen are 
you running on? The latest XENBUS supports FIFO event channels and per-cpu 
upcalls (which is a patch that went into Xen post-4.5) but should fall back to 
2-level events and the standard callback via if those features are not in the 
hypervisor. There was a bug in 2-level event handling in XENBUS which I fixed 
with:

commit f321e204a081f9c4dcc732e71283a401751a241b
Author: Paul Durrant <paul.durrant@xxxxxxxxxx>
Date:   Fri Feb 27 13:48:46 2015 +0000

    Fix event channel unmasking for two-level ABI

    The two-level ABI requires that an event is masked for the unmask
    hypercall to raise the event, so the test-and-clear operation in the
    guest basically means that pending events get stuck. The simple fix
    is to re-mask pending events before making the hypercall. This is
    unnecessary when the FIFO ABI is used, but it's safe. Hence this patch
    unconditionally re-masks pending events, regardless of ABI, before
    making the unmask hypercall.

    Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>

If you are on a Xen that's old enough not to have the FIFO ABI you'll 
definitely need that fix.
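
For reference, the fix just re-masks a pending port before issuing the unmask
hypercall. In outline it looks something like this; the identifiers are
illustrative rather than the exact XENBUS code (EVTCHNOP_unmask,
struct evtchn_unmask and shared_info_t are Xen public ABI):

#include <ntddk.h>
#include <xen.h>        /* Xen public headers as bundled with the drivers */

static VOID
EventChannelUnmask(
    IN  shared_info_t   *Shared,
    IN  ULONG           Port
    )
{
    ULONG                   Word = Port / (sizeof (xen_ulong_t) * 8);
    xen_ulong_t             Bit = (xen_ulong_t)1 << (Port % (sizeof (xen_ulong_t) * 8));
    struct evtchn_unmask    op;

    /* With the 2-level ABI, EVTCHNOP_unmask only re-raises the upcall for a
     * port that is both pending and masked. The guest-side test-and-clear
     * has already cleared the mask bit, so a pending port must be re-masked
     * before the hypercall (the real code uses a locked bit operation). */
    if (Shared->evtchn_pending[Word] & Bit)
        Shared->evtchn_mask[Word] |= Bit;

    op.port = Port;

    /* Stand-in for HYPERVISOR_event_channel_op(EVTCHNOP_unmask, &op),
     * however the drivers actually issue it. */
    (VOID) EventChannelOp(EVTCHNOP_unmask, &op);
}
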

  Paul

> 
> > Also adding Marek to the conversation, he's one of our core
> > architects and knows more about backend stuff than I do :)
> >
> >
> >> Cool.
> >
> >> Paul
> >
> >>>>
> >>>>> Paul
> 
> --
> Rafał Wojdyła
> Qubes Tools for Windows developer
_______________________________________________
win-pv-devel mailing list
win-pv-devel@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/win-pv-devel

 

