
Re: [win-pv-devel] Porting libvchan to use the Windows PV Drivers


  • To: Rafał Wojdyła <omeg@xxxxxxxxxxxxxxxxxxxxxx>, "win-pv-devel@xxxxxxxxxxxxxxxxxxxx" <win-pv-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
  • Date: Wed, 6 May 2015 16:47:19 +0000
  • Accept-language: en-GB, en-US
  • Delivery-date: Wed, 06 May 2015 16:52:40 +0000
  • List-id: Developer list for the Windows PV Drivers subproject <win-pv-devel.lists.xenproject.org>
  • Thread-index: AQHQW28a3XmHyTKwcEWapCOzUcYchJ0XicZggAFueICAABaqUP//9yuAgAARPoCAACCOgIAA7Q1wgAA2aoCAG7mbAIAAYSCggAAMAACAACuggIALpCKAgANKq4CABTk+QIAjyAsAgADfXaA=
  • Thread-topic: [win-pv-devel] Porting libvchan to use the Windows PV Drivers

> -----Original Message-----
> From: Rafał Wojdyła [mailto:omeg@xxxxxxxxxxxxxxxxxxxxxx]
> Sent: 06 May 2015 06:11
> To: Paul Durrant; win-pv-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [win-pv-devel] Porting libvchan to use the Windows PV Drivers
> 
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
> 
> It turned out that our fork of the GPL PV drivers didn't work on Qubes
> R3; I'm not really sure why. Generally the OS failed to boot because
> Windows saw an incorrect partition layout on the disks for some reason.
> The vbd driver didn't report any errors, but I saw that the boot disk
> had 2 partitions instead of 3, and a second, totally empty/uninitialized
> disk was showing up with 2 partitions. Our fork hadn't really been kept
> up to date before, and I didn't want to work with the old code, so I
> decided to use the new drivers after all.
> 

Cool.

> I have some questions regarding the patches I'll eventually send and
> how best to structure the new code. I have a working prototype that
> implements event channel and grant IOCTLs, including mapping foreign
> pages.
> 

Sounds good.

> - Mapping foreign pages requires adding new APIs to xenbus. I assume
> it's best to add them to the existing gnttab interface (as a v2 of the
> interface). That functionality doesn't really touch the guest's grant
> tables, but it's grouped in one public Xen header, so that probably
> makes the most sense. Does such an approach require changes to the
> coinstaller?
> 

Bumping the GNTTAB version and adding your extra calls is the right thing to 
do. You should not need to make any changes to the co-installer directly - just 
to the gnttab_interface header.
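The versioned-interface pattern can be sketched roughly as below. All names here are hypothetical stand-ins, not the real contents of gnttab_interface.h: the v2 struct repeats the v1 layout and appends the new entry points at the end, so existing consumers that negotiated v1 keep working unchanged.

```c
#include <stddef.h>

typedef void *PINTERFACE_CONTEXT;

/* Existing v1 entry point (illustrative signature). */
typedef int (*XENIFACE_GNTTAB_PERMIT)(PINTERFACE_CONTEXT Context,
                                      unsigned short Domain,
                                      unsigned long Pfn,
                                      int ReadOnly,
                                      unsigned int *Reference);

/* New in v2: map a batch of foreign grant references (hypothetical). */
typedef int (*XENIFACE_GNTTAB_MAP_FOREIGN)(PINTERFACE_CONTEXT Context,
                                           unsigned short Domain,
                                           const unsigned int *References,
                                           unsigned int Count,
                                           unsigned long long *Address);

struct GNTTAB_INTERFACE_V1 {
    unsigned int            Version;            /* == 1 */
    PINTERFACE_CONTEXT      Context;
    XENIFACE_GNTTAB_PERMIT  PermitForeignAccess;
};

struct GNTTAB_INTERFACE_V2 {
    unsigned int                Version;        /* == 2 */
    PINTERFACE_CONTEXT          Context;
    XENIFACE_GNTTAB_PERMIT      PermitForeignAccess;
    /* appended only, so v2 is a strict superset of v1 */
    XENIFACE_GNTTAB_MAP_FOREIGN MapForeignPages;
};
```

Because the new members are appended rather than inserted, the co-installer's version negotiation needs no change - only the header and the provider's query handler do.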

> - All IOCTL handling is implemented in xeniface. The required
> interfaces (evtchn, gnttab) should be subscribed to by the coinstaller,
> but I didn't see any code for removing the subscription. Is that
> automatic on driver uninstall?
> 

Yes. Uninstallation of a driver should blow subscriptions away; maybe that's 
missing.

> - For event channels I just accept an event handle from user mode
> instead of the weird I/O construct the GPL drivers used. Event channel
> callbacks are basically IRQ handlers, so that's mildly inconvenient,
> but I just fire a DPC and signal the event from there.
> 

Sounds right. I assume you do an ObReferenceObjectByHandle and then KeSetEvent 
the object from your DPC? I don't think there's any alternative to using a DPC 
for this.
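For reference, the flow described above might look roughly like this. This is a WDK-style sketch, not compilable outside a driver build; CHANNEL_STATE, EvtchnBind and the callback wiring are illustrative names, and error handling is omitted.

```c
#include <ntddk.h>

/* Hypothetical per-channel state; Dpc is set up once with
 * KeInitializeDpc(&State->Dpc, EvtchnDpc, State) at bind time. */
typedef struct _CHANNEL_STATE {
    KDPC    Dpc;
    PKEVENT Event;
} CHANNEL_STATE, *PCHANNEL_STATE;

NTSTATUS
EvtchnBind(
    IN  HANDLE  UserEvent,          /* handle passed in the IOCTL */
    OUT PKEVENT *Event
    )
{
    /* Take a kernel reference so the KEVENT outlives the handle,
     * even if the process closes it or dies. */
    return ObReferenceObjectByHandle(UserEvent,
                                     EVENT_MODIFY_STATE,
                                     *ExEventObjectType,
                                     UserMode,
                                     (PVOID *)Event,
                                     NULL);
}

VOID
EvtchnDpc(
    IN  PKDPC   Dpc,
    IN  PVOID   Context,
    IN  PVOID   Argument1,
    IN  PVOID   Argument2
    )
{
    PCHANNEL_STATE State = Context;

    /* KeSetEvent is safe at DISPATCH_LEVEL; wakes the user waiter. */
    KeSetEvent(State->Event, IO_NO_INCREMENT, FALSE);
}

VOID
EvtchnCallback(
    IN  PVOID   Context             /* the event-channel "IRQ" handler */
    )
{
    PCHANNEL_STATE State = Context;

    /* Defer to a DPC; do as little as possible in the callback. */
    KeInsertQueueDpc(&State->Dpc, NULL, NULL);
}
```

The matching teardown would KeRemoveQueueDpc/KeFlushQueuedDpcs and then ObDereferenceObject the event.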

> - For tracking purposes I assume I can rely on local ports being
> unique (so that the port is an index/key for my internal state list).
> 

Yes, ports are unique per domain, but they will be recycled eagerly, so you 
just need to make sure you're not holding onto stale state after the channel is 
closed. You can rely on all pending events having been processed before a 
channel number is recycled, but you cannot rely on all pending events having 
been processed before the close operation returns (since there is no good way 
of synchronizing a close on one CPU with a pending event on another).
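Concretely, port-keyed tracking with eager recycling means the close path must drop the driver's state before (or atomically with) closing the port, so a new channel that reuses the number never observes stale state. A minimal stand-in for what would be a locked list in the driver:

```c
#include <stddef.h>

#define MAX_PORTS 16

/* index == local port number; NULL means no channel bound */
static void *port_state[MAX_PORTS];

int state_attach(unsigned port, void *state)
{
    if (port >= MAX_PORTS || port_state[port] != NULL)
        return -1;              /* ports are unique: no duplicates */
    port_state[port] = state;
    return 0;
}

void *state_lookup(unsigned port)
{
    return port < MAX_PORTS ? port_state[port] : NULL;
}

void state_close(unsigned port)
{
    /* Drop our state first; only after this may the port itself be
     * closed and its number recycled for a new channel. */
    if (port < MAX_PORTS)
        port_state[port] = NULL;
}
```

In the driver the table would be guarded by a lock held across both the state removal and the port close, per Paul's caveat about events pending on another CPU.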

> - Event channels don't have any security applied to them, so in theory
> any process can signal or close any other process's channel, because
> xeniface doesn't track device opens. Should something be done about
> that, like keeping track of the process that opens a specific channel?
> 

Yes, I would say so. You need to track things like grant maps or open event 
channels against the file object anyway, so that they can be destroyed if the 
process terminates abnormally.
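The per-file-object tracking Paul describes can be sketched as a list of resources keyed by their owning file object, walked from the cleanup dispatch routine. Kernel types are replaced by plain pointers here so the bookkeeping logic itself can be exercised; in the driver the owner would be the PFILE_OBJECT seen at IRP_MJ_CREATE, and the list would be lock-protected.

```c
#include <stddef.h>

struct resource {
    void            *file_object;   /* owner (PFILE_OBJECT in-driver) */
    void            (*destroy)(struct resource *);
    struct resource *next;
};

static struct resource *resources;

void resource_track(struct resource *r, void *file_object,
                    void (*destroy)(struct resource *))
{
    r->file_object = file_object;
    r->destroy     = destroy;
    r->next        = resources;
    resources      = r;
}

/* Called from the cleanup dispatch for a given file object; returns
 * how many resources were released. */
int resource_cleanup(void *file_object)
{
    struct resource **pp = &resources;
    int released = 0;

    while (*pp != NULL) {
        struct resource *r = *pp;

        if (r->file_object == file_object) {
            *pp = r->next;          /* unlink before destroying */
            if (r->destroy != NULL)
                r->destroy(r);      /* ungrant / unmap / close channel */
            released++;
        } else {
            pp = &r->next;
        }
    }
    return released;
}
```

This also answers the security question above in part: signal/close IOCTLs can be rejected unless the channel is found under the caller's own file object.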

> - Granting pages isn't very complicated: allocate some pool memory,
> build an MDL over it, call PermitForeignAccess, and map it into user
> space. The user gets the address and a list of grant references.
> 
> - For mapping foreign pages I allocate address space with
> FdoAllocateIoSpace() and the rest is pretty much the same as with
> granting. The user gets just the address and a handle to a
> driver-maintained bookkeeping context.
> 

Yep. Sounds right.
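The grant path described above might look roughly like this. Again a WDK-style sketch, not compilable outside a driver build: error paths are trimmed, the pool tag is arbitrary, and the per-page PermitForeignAccess loop is left as a comment because its exact shape depends on the XENBUS_GNTTAB interface plumbing.

```c
#include <ntddk.h>

PVOID
GnttabGrantPages(
    IN  ULONG   NumberOfPages,
    OUT PMDL    *Mdl
    )
{
    PVOID Buffer;
    PVOID UserVa;

    /* 1. Nonpaged buffer to be shared with the foreign domain. */
    Buffer = ExAllocatePoolWithTag(NonPagedPool,
                                   NumberOfPages * PAGE_SIZE,
                                   'fneX');
    if (Buffer == NULL)
        return NULL;

    /* 2. MDL describing the buffer's physical pages. */
    *Mdl = IoAllocateMdl(Buffer, NumberOfPages * PAGE_SIZE,
                         FALSE, FALSE, NULL);
    MmBuildMdlForNonPagedPool(*Mdl);

    /* 3. One grant reference per PFN in the MDL, via the XENBUS_GNTTAB
     *    PermitForeignAccess entry point (details omitted). */

    /* 4. Map the same pages into the requesting process.  A real
     *    implementation must wrap this in __try/__except, since
     *    UserMode mappings can raise. */
    UserVa = MmMapLockedPagesSpecifyCache(*Mdl, UserMode, MmCached,
                                          NULL, FALSE,
                                          NormalPagePriority);
    return UserVa;
}
```

The returned address plus the reference list from step 3 are what the IOCTL hands back to user mode.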

> - If the hypervisor returns an error during unmapping/ungranting,
> that's pretty bad news, since we can't free such memory (the foreign
> domain still has access to it). I just ASSERT there, since I assume
> it's not an issue during normal system operation.
> 

Indeed. I don't think there's anything else that can be done apart from 
retrying forever.
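The safe options boil down to retrying until the revoke succeeds, or deliberately leaking the page (never freeing memory a foreign domain can still write). A portable sketch of the retry shape, where xen_gnttab_end_access is a hypothetical stand-in for the hypercall wrapper (here simulated to fail twice before succeeding, so the loop can be exercised):

```c
#include <stdbool.h>

/* Hypothetical stand-in for the real hypercall wrapper; this test
 * double fails twice and then reports the reference revoked. */
static unsigned failures_left = 2;

static bool xen_gnttab_end_access(unsigned int ref)
{
    (void)ref;
    if (failures_left > 0) {
        failures_left--;
        return false;
    }
    return true;
}

/* Bounded here so the sketch terminates; a driver could instead
 * ASSERT, retry forever, or park the page on a "leaked" list. */
bool gnttab_revoke(unsigned int ref, unsigned int max_tries)
{
    while (max_tries--) {
        if (xen_gnttab_end_access(ref))
            return true;    /* page is ours again; safe to free */
    }
    return false;           /* still granted: must NOT free the page */
}
```

On the false path the only correct behavior is to keep the backing memory allocated for the lifetime of the system.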

> I've only tested this on 64-bit Windows 7 so far, but it seems to work
> fine. I'll be doing more testing after I have libvchan working on top
> of the new drivers. And to close, a screenshot of my test program
> sharing memory on Qubes R3:
> 
> http://i.imgur.com/xhfDkhl.png
> 

Looks impressive, although I don't have much of a clue what some of the output 
means :-)

BTW, if you want to post RFC patches I'm happy to look them over.

Cheers,

    Paul

> - --
> Rafał Wojdyła
> Qubes Tools for Windows developer
> -----BEGIN PGP SIGNATURE-----
> 
> iQEcBAEBAgAGBQJVSaJ4AAoJEIWi9rB2GrW7fR0H/jyjhlKsf/OoS+AI/QiiuNDK
> Ud66Lj+SpEkkMcLVi8I6zIZzCTwn1pVeBxuKX1Fo+i1OHOEP6WttD1GRpdMUkLAr
> oLhZD5jSMJaUflzSsYJDzH9iG5Kz4D9JZ8bZgml6TiY84YzqM1n2dOuc2tcgxI67
> O4H+4ZjebhwQV8WpXUoSYP0euDeFRkSKi6zoj53rLZ26ZQVLVR8emeILHQQjrU49
> yKwFkLmsMq44OroAtqMLQvVFMdWHmVwsducBauNLPK9IDgCDQtdumSDsuUfXtM0D
> tWexuHhSi9UAjE+mXClcDEq0pk+hFIiPTAlVpwWefJFfc4PjiCJO1aGSo5sKPVc=
> =nYgX
> -----END PGP SIGNATURE-----
_______________________________________________
win-pv-devel mailing list
win-pv-devel@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/win-pv-devel

 

