
Re: [win-pv-devel] [PATCH 0/3] Add XENBUS APIs necessary for libvchan implementation



> -----Original Message-----
> From: Rafał Wojdyła [mailto:omeg@xxxxxxxxxxxxxxxxxxxxxx]
> Sent: 11 September 2015 17:46
> To: Paul Durrant; win-pv-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [win-pv-devel] [PATCH 0/3] Add XENBUS APIs necessary for
> libvchan implementation
> 
> On 2015-09-11 17:46, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: win-pv-devel-bounces@xxxxxxxxxxxxxxxxxxxx [mailto:win-pv-devel-
> >> bounces@xxxxxxxxxxxxxxxxxxxx] On Behalf Of Rafal Wojdyla
> >> Sent: 11 September 2015 13:30
> >> To: win-pv-devel@xxxxxxxxxxxxxxxxxxxx
> >> Subject: [win-pv-devel] [PATCH 0/3] Add XENBUS APIs necessary for
> libvchan
> >> implementation
> >>
> >> This patch series adds some missing XENBUS APIs that are needed for full
> >> libvchan implementation on Windows, namely mapping foreign memory
> >> pages
> >> and setting XenStore key permissions.
> >>
> >
> > All patches applied. Thanks!
> >
> >   Paul
> 
> No problem. Xeniface patches will come next and I have some questions on
> how to best structure the code, especially the usermode DLL and
> libvchan. For now I have the DLL (named xencontrol) in a separate
> directory in the xeniface project, along with a small "client/server"
> test program that exercises most of its exported functions. Our libvchan
> port is kept out of the xeniface project as it doesn't really belong in
> there in my opinion.
> 

I think it's ok for libraries to live in the xeniface package; that's what it's 
for. Your xencontrol DLL sounds fine, and the bundled test program is an 
excellent idea. I have no problem with you adding a vchan DLL too; I think the 
package should be the general repository for all user-available APIs.

> The DLL is basically a wrapper for most of the xeniface IOCTLs. For
> logging I'm using a simple callback that the caller can register, with
> several verbosity levels.

Sounds ok. I've never managed to figure out a good way of hooking 
OutputDebugString in the kernel. Since it uses a standard DbgPrint rather than 
DbgPrintEx, you have to enable the DEFAULT kd filter, and that just opens the 
flood-gates.
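
For what it's worth, a minimal sketch of the DbgPrintEx route on the driver 
side (the component ID, level and wrapper name are just illustrative, not 
anything in xeniface); output sent this way can be enabled with a per-component 
mask instead of the DEFAULT one:

    #include <ntddk.h>

    /* Illustrative wrapper: vDbgPrintEx() output is tagged with a component
     * ID and level, so it can be enabled selectively with, e.g.,
     *     kd> ed nt!Kd_IHVDRIVER_Mask 0xF
     * rather than opening up the DEFAULT filter. */
    VOID
    LogInfo(
        IN PCSTR Format,
        ...
        )
    {
        va_list Arguments;

        va_start(Arguments, Format);
        vDbgPrintEx(DPFLTR_IHVDRIVER_ID, DPFLTR_INFO_LEVEL, Format, Arguments);
        va_end(Arguments);
    }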

> 
> In xeniface itself I'm still using the process notification approach for
> cleanup. Didn't have a problem with it thus far, but I can change that
> to pend IOCTLs and add secondary IOCTLs to get actual data back from the
> driver.
> 

I'd prefer IRP cancellation as a mechanism; I think it's less likely that 
Microsoft would screw around with that and break it (as they have with other 
APIs). Pended IOCTLs for mapped grants should work, I think, and things like 
event channels can be cleaned up when the file object is closed.
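
A rough sketch of the shape I have in mind, assuming one pended IRP per 
mapping; XENIFACE_MAPPING and MappingDestroy() are hypothetical stand-ins, not 
the actual xeniface code:

    #include <ntddk.h>

    /* Hypothetical per-mapping state and teardown helper, for illustration. */
    typedef struct _XENIFACE_MAPPING XENIFACE_MAPPING, *PXENIFACE_MAPPING;
    extern VOID MappingDestroy(IN PXENIFACE_MAPPING Mapping);

    static VOID
    MappingCancel(
        IN PDEVICE_OBJECT DeviceObject,
        IN PIRP Irp
        )
    {
        PXENIFACE_MAPPING Mapping = Irp->Tail.Overlay.DriverContext[0];

        UNREFERENCED_PARAMETER(DeviceObject);

        /* Entered with the system cancel spinlock held; drop it before
         * doing any real work. */
        IoReleaseCancelSpinLock(Irp->CancelIrql);

        MappingDestroy(Mapping);    /* unmap the foreign pages */

        Irp->IoStatus.Status = STATUS_CANCELLED;
        Irp->IoStatus.Information = 0;
        IoCompleteRequest(Irp, IO_NO_INCREMENT);
    }

    static NTSTATUS
    MappingIoctlMap(
        IN PIRP Irp,
        IN PXENIFACE_MAPPING Mapping
        )
    {
        Irp->Tail.Overlay.DriverContext[0] = Mapping;

        IoMarkIrpPending(Irp);
        IoSetCancelRoutine(Irp, MappingCancel);

        /* Close the race where the IRP was cancelled before the cancel
         * routine was set: if it's still in place, do the teardown here. */
        if (Irp->Cancel && IoSetCancelRoutine(Irp, NULL) != NULL) {
            MappingDestroy(Mapping);
            Irp->IoStatus.Status = STATUS_CANCELLED;
            Irp->IoStatus.Information = 0;
            IoCompleteRequest(Irp, IO_NO_INCREMENT);
        }

        return STATUS_PENDING;
    }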

> I've also had some problems with the EVTCHN DPC in xeniface. Namely, it
> seems that the DPC routine can run even after KeRemoveQueueDpc() which
> I
> was doing on cleanup.

That's correct. Dpcs are removed from the queue before execution, so 
KeRemoveQueueDpc() on one CPU can race with Dpc execution on another. If you 
affinitize the Dpc and do the removal on the same CPU, at dispatch level, then 
you should be able to avoid that race... or you can use your own 
synchronization mechanism within the Dpc (like a flag to tell it whether to do 
anything or not).
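
Something like this is what I mean by a flag inside the Dpc; the names 
(EVTCHN_CONTEXT and friends) are hypothetical, not the xeniface structures:

    #include <ntddk.h>

    /* Hypothetical per-channel context, for illustration. */
    typedef struct _EVTCHN_CONTEXT {
        KSPIN_LOCK  Lock;
        BOOLEAN     Enabled;
        KDPC        Dpc;
        /* ... event channel state ... */
    } EVTCHN_CONTEXT, *PEVTCHN_CONTEXT;

    VOID
    EvtchnDpc(
        IN PKDPC Dpc,
        IN PVOID Context,
        IN PVOID Argument1,
        IN PVOID Argument2
        )
    {
        PEVTCHN_CONTEXT EvtchnContext = Context;

        UNREFERENCED_PARAMETER(Dpc);
        UNREFERENCED_PARAMETER(Argument1);
        UNREFERENCED_PARAMETER(Argument2);

        /* Already at DISPATCH_LEVEL, so the cheap acquire is sufficient. */
        KeAcquireSpinLockAtDpcLevel(&EvtchnContext->Lock);

        if (EvtchnContext->Enabled) {
            /* ... signal the usermode event, etc. ... */
        }

        KeReleaseSpinLockFromDpcLevel(&EvtchnContext->Lock);
    }

    VOID
    EvtchnDisable(
        IN PEVTCHN_CONTEXT EvtchnContext
        )
    {
        KIRQL Irql;

        /* Any Dpc instance that runs after this sees Enabled == FALSE and
         * does nothing, regardless of which CPU it runs on. */
        KeAcquireSpinLock(&EvtchnContext->Lock, &Irql);
        EvtchnContext->Enabled = FALSE;
        KeReleaseSpinLock(&EvtchnContext->Lock, Irql);

        KeRemoveQueueDpc(&EvtchnContext->Dpc);
    }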

> I'm now calling KeFlushQueuedDpcs() in the cleanup
> function to ensure that the DPC can't run after I free my data. Is this
> the right approach?

That can be tricky. KeFlushQueuedDpcs() has to run at PASSIVE_LEVEL, which I 
find is generally too restrictive to be useful. I only tend to use that call at 
the very end of the day, before freeing any control structures that may be used 
by a Dpc.
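
i.e. (continuing the hypothetical names from the sketch above, with an 
illustrative pool tag) the final teardown boils down to:

    VOID
    EvtchnDestroy(
        IN PEVTCHN_CONTEXT EvtchnContext
        )
    {
        ASSERT(KeGetCurrentIrql() == PASSIVE_LEVEL);

        EvtchnDisable(EvtchnContext);   /* clear the flag, KeRemoveQueueDpc() */

        /* Wait for any instance of the Dpc already running on another CPU to
         * return; only after this is it safe to free anything it may touch. */
        KeFlushQueuedDpcs();

        ExFreePoolWithTag(EvtchnContext, 'HCVE');
    }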

  Cheers,

    Paul

> 
> --
> Rafał Wojdyła
> Qubes Tools for Windows developer
> https://www.qubes-os.org/
