
Re: “Backend has not unmapped grant” errors



On Mon, Aug 29, 2022 at 04:39:29PM +0200, Marek Marczykowski-Górecki wrote:
> On Mon, Aug 29, 2022 at 02:55:55PM +0200, Juergen Gross wrote:
> > On 28.08.22 07:15, Demi Marie Obenour wrote:
> > > On Wed, Aug 24, 2022 at 08:11:56AM +0200, Juergen Gross wrote:
> > > > On 24.08.22 02:20, Marek Marczykowski-Górecki wrote:
> > > > > On Tue, Aug 23, 2022 at 09:48:57AM +0200, Juergen Gross wrote:
> > > > > > On 23.08.22 09:40, Demi Marie Obenour wrote:
> > > > > > > I recently had a VM’s /dev/xvdb stop working with a “backend has
> > > > > > > not unmapped grant” error.  Since /dev/xvdb was the VM’s private
> > > > > > > volume, that rendered the VM effectively useless.  I had to kill
> > > > > > > it with qvm-kill.
> > > > > > > 
> > > > > > > The backend of /dev/xvdb is dom0, so a malicious backend is
> > > > > > > clearly not the cause of this.  I believe the actual cause is a
> > > > > > > race condition, such as the following:
> > > > > > > 
> > > > > > > 1. GUI agent in VM allocates grant X.
> > > > > > > 2. GUI agent tells GUI daemon in dom0 to map X.
> > > > > > > 3. GUI agent frees grant X.
> > > > > > > 4. blkfront allocates grant X and passes it to dom0.
> > > > > > > 5. dom0’s blkback maps grant X.
> > > > > > > 6. blkback unmaps grant X.
> > > > > > > 7. GUI daemon maps grant X.
> > > > > > > 8. blkfront tries to revoke access to grant X and fails.
> > > > > > >    Disaster ensues.
> > > > > > > 
> > > > > > > What could be done to prevent this race?  Right now all of the
> > > > > > > approaches I can think of are horribly backwards-incompatible.
> > > > > > > They require replacing grant IDs with some sort of handle, and
> > > > > > > requiring userspace to pass these handles to ioctls.  It is also
> > > > > > > possible that netfront and blkfront could race against each
> > > > > > > other in a way that causes this, though I suspect that race
> > > > > > > would be much harder to trigger.
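> > > > > > > 
> > > > > > > To make the idea concrete, here is a purely hypothetical sketch
> > > > > > > of what a handle-based interface could look like; none of these
> > > > > > > names exist today:
> > > > > > > 
> > > > > > >     /* Hypothetical: userspace never sees raw grant refs, only
> > > > > > >      * an opaque handle that is never reused, so a recycled
> > > > > > >      * grant ref cannot be confused with an old one. */
> > > > > > >     struct xen_grant_handle {
> > > > > > >         uint64_t id;  /* unique for the domain's lifetime */
> > > > > > >     };
> > > > > > > 
> > > > > > >     struct ioctl_gntalloc_alloc_handle {
> > > > > > >         uint16_t domid;  /* IN: peer allowed to map */
> > > > > > >         uint16_t flags;  /* IN: e.g. writable */
> > > > > > >         uint32_t count;  /* IN: number of pages */
> > > > > > >         struct xen_grant_handle handle;  /* OUT */
> > > > > > >     };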
> > > > > > > 
> > > > > > > This has happened more than once, so it is not a fluke caused
> > > > > > > by, e.g., cosmic rays or other random bit-flips.
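> > > > > > > 
> > > > > > > For reference, the agent-side allocate/free cycle in steps 1
> > > > > > > and 3 looks roughly like this with the Linux gntalloc device
> > > > > > > (a minimal sketch, not the actual gui-agent code; error
> > > > > > > handling omitted and 4 KiB pages assumed):
> > > > > > > 
> > > > > > >     #include <fcntl.h>
> > > > > > >     #include <sys/ioctl.h>
> > > > > > >     #include <sys/mman.h>
> > > > > > >     #include <xen/gntalloc.h>
> > > > > > > 
> > > > > > >     int main(void)
> > > > > > >     {
> > > > > > >         int fd = open("/dev/xen/gntalloc", O_RDWR);
> > > > > > >         struct ioctl_gntalloc_alloc_gntref alloc = {
> > > > > > >             .domid = 0,  /* backend domain (dom0 here) */
> > > > > > >             .flags = GNTALLOC_FLAG_WRITABLE,
> > > > > > >             .count = 1,
> > > > > > >         };
> > > > > > >         /* Step 1: allocate grant X.  alloc.gref_ids[0] is
> > > > > > >          * what gets sent to the daemon (step 2). */
> > > > > > >         ioctl(fd, IOCTL_GNTALLOC_ALLOC_GNTREF, &alloc);
> > > > > > >         void *fb = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
> > > > > > >                         MAP_SHARED, fd, alloc.index);
> > > > > > > 
> > > > > > >         /* Step 3: free grant X.  Nothing stops blkfront from
> > > > > > >          * being handed the same ref (step 4) while the daemon
> > > > > > >          * still holds the stale number. */
> > > > > > >         munmap(fb, 4096);
> > > > > > >         struct ioctl_gntalloc_dealloc_gntref dealloc = {
> > > > > > >             .index = alloc.index,
> > > > > > >             .count = 1,
> > > > > > >         };
> > > > > > >         ioctl(fd, IOCTL_GNTALLOC_DEALLOC_GNTREF, &dealloc);
> > > > > > >         return 0;
> > > > > > >     }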
> > > > > > > 
> > > > > > > Marek, do you have any suggestions?
> > > > > > 
> > > > > > To me that sounds like the GUI interface is the culprit.
> > > > > > 
> > > > > > The GUI agent in the guest should only free a grant if it got a
> > > > > > message from the backend that it can do so.  Just assuming it can
> > > > > > free the grant because it isn't currently in use is the broken
> > > > > > assumption here.
> > > > > 
> > > > > FWIW, I hit this issue twice already in this week's CI run, while
> > > > > it never happened before.  The difference compared to the previous
> > > > > run is Linux 5.15.57 vs 5.15.61.  The latter reports persistent
> > > > > grants disabled.
> > > > 
> > > > I think this additional bug is just triggering the race in the GUI
> > > > interface more easily, as blkfront will allocate new grants with a
> > > > much higher frequency.
> > > 
> > > 1. Treat “backend has not unmapped grant” errors as non-fatal.  The
> > >    most likely cause is buggy userspace software, not an attempt to
> > >    exploit XSA-396.  Instead of disabling the device, just log a
> > >    warning message.
> > > 
> > > > So fixing the persistent grant issue will just paper over the real
> > > > issue.
> > > 
> > > Indeed so, but making the bug happen much less frequently is still a
> > > significant win for users.
> > 
> > Probably, yes.
> > 
> > > In the long term, there is one situation I do not have a good solution
> > > for: recovery from GUI agent crashes.  If the GUI agent crashes, the
> > > kernel it is running under has two bad choices.  Either the kernel can
> > > reclaim the grants, risking them being mapped at a later time by the GUI
> > > daemon, or it can leak them, which is bad for obvious reasons.  I
> > > believe the current implementation makes the former choice.
> > 
> > It does.
> > 
> > I don't have enough information about the GUI architecture you are using.
> > Which components are involved on the backend side, and which on the
> > frontend side? Especially the responsibilities and the communication
> > regarding grants are important here.
> 
> I'll limit the description to the relevant minimum here.
> The gui-agent(*) uses gntalloc to share framebuffers (they are allocated
> whenever an application within domU opens a window), then sends grant
> reference numbers over vchan to the gui-daemon (running in dom0 by
> default, but it can also be another domU).
> Then the gui-daemon(*) maps them.
> Later, when an application closes a window, the shared memory is
> unmapped, and gui-daemon is informed about it. Releasing the grant refs
> is deferred by the kernel (until gui-daemon unmaps them). It may happen
> that the unmap on the gui-agent side occurs before gui-daemon maps them
> at all. We are modifying our GUI protocol to delay releasing grants on
> the userspace side, to coordinate with gui-daemon (basically, wait
> until gui-daemon confirms it has unmapped them). This should fix the
> "normal" case.
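> 
> In pseudo-code, the agent side of the fixed flow is something like the
> following (the message names, the send_msg() helper and the window
> struct are invented for illustration; the real protocol differs):
> 
>     struct window {
>         uint32_t id;
>         void *fb;
>         size_t fb_size;
>         struct ioctl_gntalloc_dealloc_gntref dealloc;
>     };
> 
>     void on_window_close(struct window *w)
>     {
>         /* Ask gui-daemon to unmap, but keep the grants allocated. */
>         send_msg(vchan, MSG_DESTROY_WINDOW, w->id);
>     }
> 
>     void on_destroy_confirmed(struct window *w)
>     {
>         /* Only after gui-daemon confirmed the unmap is it safe to
>          * return the grant refs for reuse. */
>         munmap(w->fb, w->fb_size);
>         ioctl(gntalloc_fd, IOCTL_GNTALLOC_DEALLOC_GNTREF, &w->dealloc);
>     }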
> But if the gui-agent crashes just after sending the grant refs, but
> before gui-daemon maps them, then the problem is still there. If they
> are immediately released by the kernel for others to use, we can hit
> the same issue again (for example, blkfront using them, and then
> gui-daemon mapping them). I don't see a race-free method for solving
> this with the current API. The GUI daemon can notice when such a
> situation happens (by checking whether the gui-agent is still alive
> after mapping the grants), but by then it is already too late.
> 
> The main difference compared to kernel drivers is the automatic release
> on crash (or other unclean exit). In case of a kernel driver crash,
> either the whole VM goes down, or at least the automatic release
> doesn't happen.
> Maybe gntalloc could have some flag (per open file? per allocated
> grant?) to _not_ release the grant reference (i.e. leak it) on an
> implicit unmap, as opposed to an explicit release? Such an explicit
> release would need to be added to the Linux gntshr API, as
> xengntshr_unshare() currently is just munmap(). I don't see many other
> options to avoid a userspace crash (potentially) taking down a PV
> device with it too...
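> 
> Purely hypothetically, it could look like this (the flag name and
> value are invented; no kernel has this today):
> 
>     /* Hypothetical gntalloc extension. */
>     #define GNTALLOC_FLAG_NO_IMPLICIT_RELEASE 2  /* invented */
> 
>     struct ioctl_gntalloc_alloc_gntref alloc = {
>         .domid = 0,
>         .flags = GNTALLOC_FLAG_WRITABLE |
>                  GNTALLOC_FLAG_NO_IMPLICIT_RELEASE,
>         .count = 1,
>     };
>     /* With such a flag, a crash (implicit release when the file is
>      * closed) would leak the grant instead of returning the ref for
>      * reuse; only an explicit IOCTL_GNTALLOC_DEALLOC_GNTREF would
>      * really end it. */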

That is still less than great, as it leads to a memory leak.  Another
approach would be some sort of unmap/revoke operation in the backend, so
that the backend revokes its own access to the grants before telling the
frontend it has unmapped them.  This would cause the userspace mmap()
call to fail.
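
Hypothetically, such a revoke could be a gntdev-level operation along
these lines (nothing like it exists today; the names are invented):

    /* Invented gntdev extension: forbid any future mapping of a
     * given grant ref by this domain, even if the granting domain
     * recycles the ref for something else. */
    struct ioctl_gntdev_revoke_grant_ref {
        uint32_t domid;  /* granting domain */
        uint32_t ref;    /* grant ref that must never be mapped again */
    };

    /* The backend would revoke before acking the frontend:
     *     ioctl(gntdev_fd, IOCTL_GNTDEV_REVOKE_GRANT_REF, &revoke);
     * after which a later gui-daemon mmap() of that ref fails instead
     * of silently mapping a recycled grant. */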

> (*) gui-agent and gui-daemon here are each in fact two processes (the
> qubes gui process that handles vchan communication, and Xorg, which
> does the actual mapping). It complicates a few things, but it is
> generally an irrelevant detail from the Xen point of view.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
