
Re: [win-pv-devel] [PATCH 0/5] Add new IOCTLs to XENIFACE



> -----Original Message-----
> From: Rafał Wojdyła [mailto:omeg@xxxxxxxxxxxxxxxxxxxxxx]
> Sent: 08 October 2015 14:33
> To: Paul Durrant; win-pv-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [win-pv-devel] [PATCH 0/5] Add new IOCTLs to XENIFACE
> 
> On 2015-10-08 11:00, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: win-pv-devel-bounces@xxxxxxxxxxxxxxxxxxxx [mailto:win-pv-devel-
> >> bounces@xxxxxxxxxxxxxxxxxxxx] On Behalf Of Rafal Wojdyla
> >> Sent: 07 October 2015 05:48
> >> To: win-pv-devel@xxxxxxxxxxxxxxxxxxxx
> >> Subject: [win-pv-devel] [PATCH 0/5] Add new IOCTLs to XENIFACE
> >>
> >> This patch series implements new IOCTLs needed for libvchan. User mode
> >> code will come next. Some notes:
> >>
> >
> > Cool. Thanks for posting... I'll review as soon as I can.
> >
> >> - All new IOCTLs have their input and output data formats defined as
> >> structs. I kept them separate for all IOCTLs even if the contents are the
> >> same. Not sure if that's strictly needed, but I feel it's a good idea -- if
> >> one IOCTL changes its data format, the others won't be affected.
> >>
> >
> > I think that's a good decision. It's the model I tend to prefer.
> >
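
For illustration, the kind of thing that means in practice (made-up names,
not the actual XENIFACE definitions): two IOCTLs whose input currently has
identical contents still get distinct struct types, so one interface can
change later without silently affecting the other.

    /* Hypothetical example only, not the real XENIFACE headers. */
    typedef struct _GET_GRANT_RESULT_IN {
        ULONG RequestId;            /* request to query */
    } GET_GRANT_RESULT_IN, *PGET_GRANT_RESULT_IN;

    typedef struct _GET_MAP_RESULT_IN {
        ULONG RequestId;            /* same contents, deliberately a
                                       distinct type */
    } GET_MAP_RESULT_IN, *PGET_MAP_RESULT_IN;
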
> >> - Grant/map IOCTLs are pended forever; a separate IOCTL is used to get
> >> their actual result. Unmapping memory from user mode needs to be done
> >> below DISPATCH_LEVEL and in the context of the same process, but cancel
> >> routines are not guaranteed to be either. Tests showed that they were
> >> always called at APC_LEVEL and in the correct context, but to be safe I
> >> queue a work item to execute at PASSIVE_LEVEL and use
> >> KeStackAttachProcess to change the address space.
> >>
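
A minimal sketch of that deferral, for illustration only -- the names
(MAP_CONTEXT, MapRevokeWorker, MapQueueRevoke) are hypothetical and not
taken from the actual patches. The cancel path never touches the user
mapping itself; it only queues a work item that runs at PASSIVE_LEVEL and
attaches to the owning process before tearing the mapping down:

    /* Hypothetical sketch, not the real XENIFACE code.  The work item and
     * the process reference are set up when the IRP is first pended, so
     * the cancel path itself cannot fail. */
    typedef struct _MAP_CONTEXT {
        PDEVICE_OBJECT  DeviceObject;
        PIRP            Irp;        /* the pended grant/map IRP */
        PEPROCESS       Process;    /* owner of the user-mode mapping */
        PMDL            Mdl;        /* pages mapped into user mode */
        PVOID           UserVa;     /* user-mode base address */
        PIO_WORKITEM    WorkItem;   /* from IoAllocateWorkItem() */
    } MAP_CONTEXT, *PMAP_CONTEXT;

    /* Runs at PASSIVE_LEVEL in a system worker thread. */
    static VOID
    MapRevokeWorker(
        PDEVICE_OBJECT DeviceObject,
        PVOID Context
        )
    {
        PMAP_CONTEXT Ctx = Context;
        KAPC_STATE ApcState;

        UNREFERENCED_PARAMETER(DeviceObject);

        /* Switch to the address space of the owning process, undo the
         * user-mode mapping, then detach again. */
        KeStackAttachProcess((PRKPROCESS)Ctx->Process, &ApcState);
        MmUnmapLockedPages(Ctx->UserVa, Ctx->Mdl);
        KeUnstackDetachProcess(&ApcState);

        ObDereferenceObject(Ctx->Process);

        Ctx->Irp->IoStatus.Status = STATUS_CANCELLED;
        Ctx->Irp->IoStatus.Information = 0;
        IoCompleteRequest(Ctx->Irp, IO_NO_INCREMENT);

        IoFreeWorkItem(Ctx->WorkItem);
        ExFreePool(Ctx);
    }

    /* Safe at or below DISPATCH_LEVEL: only defers the real work. */
    static VOID
    MapQueueRevoke(
        PMAP_CONTEXT Ctx
        )
    {
        IoQueueWorkItem(Ctx->WorkItem, MapRevokeWorker,
                        DelayedWorkQueue, Ctx);
    }
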
> >
> > Can you not call KeStackAttachProcess() directly in the cancellation routine?
> 
> MSDN says that KeStackAttachProcess() needs to be called below
> DISPATCH_LEVEL (understandable, since it touches the user-mode address
> space). I wonder, though, whether I'm wrong in thinking that my
> CsqCompleteCanceledIrp() routine is not guaranteed to run below
> DISPATCH_LEVEL. MSDN doesn't explicitly specify the IRQL, but says this
> about cancel-safe IRP queues (which is what I'm using):
> 
> The driver is insulated from all IRP cancellation handling. The system
> provides a Cancel routine for IRPs in the queue. This routine calls
> CsqRemoveIrp to remove the IRP from the queue, and
> CsqCompleteCanceledIrp to complete the IRP cancellation.
> 
> Then description for the generic Cancel routine says:
> 
> The Cancel routine executes in an arbitrary thread context at IRQL =
> DISPATCH_LEVEL until it calls IoReleaseCancelSpinLock, which changes the
> IRQL to a caller-supplied value.
> 
> ...well, it's still "caller-supplied" so I don't think we can safely
> assume my CsqCompleteCanceledIrp() will always be below
> DISPATCH_LEVEL.
> 

Damn, for some reason I thought cancellations came in at APC_LEVEL. Looks like
work items or your own completion thread (blocked on IRP cancellation) are
probably the only way to go then.

  Paul
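
For completeness, a sketch (again with hypothetical names, reusing
MAP_CONTEXT/MapQueueRevoke from the sketch above) of how the cancel-safe
queue side would hand off to such a worker. Because the IRQL on entry to
the CsqCompleteCanceledIrp callback can only be assumed to be at or below
DISPATCH_LEVEL, it does nothing but defer; the callback is registered with
IoCsqInitialize along with the usual insert/remove/peek/lock callbacks.

    /* Hypothetical sketch, not the real XENIFACE code. */
    static VOID
    GnttabCsqCompleteCanceledIrp(
        PIO_CSQ Csq,
        PIRP Irp
        )
    {
        /* The per-request context was stashed in the IRP when the
         * request was pended. */
        PMAP_CONTEXT Ctx = Irp->Tail.Overlay.DriverContext[0];

        UNREFERENCED_PARAMETER(Csq);

        /* Possibly at DISPATCH_LEVEL here, so only defer; the
         * PASSIVE_LEVEL worker completes the IRP once the user-mode
         * mapping has been torn down. */
        MapQueueRevoke(Ctx);
    }
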

> >
> >> - Is there a particular reason for header files not being included in VS
> >> projects? That would help with navigating the code.
> >>
> >
> > No, none at all. I don't use Visual Studio for editing, so if adding the
> > header files into the projects would help, please feel free.
> >
> >> - I only updated the VS2013 project; I don't have VS2012 at hand, but
> >> it's probably easy to do by hand.
> >
> > Yes, the translation is generally not hard to do; plus the vcxproj files
> > that VS writes are usually so bad they need to be hand-edited afterwards
> > anyway.
> >
> >   Paul
> >
> >>
> >> - Feel free to change names/formatting if/where needed :)
> >>
> 
> --
> Rafał Wojdyła
> Qubes Tools for Windows developer
> https://www.qubes-os.org/
_______________________________________________
win-pv-devel mailing list
win-pv-devel@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/win-pv-devel

 

