
Re: [Xen-devel] Should PV frontend drivers trust the backends?



> -----Original Message-----
> From: Marek Marczykowski-Górecki
> [mailto:marmarek@xxxxxxxxxxxxxxxxxxxxxx]
> Sent: 30 April 2018 18:33
> To: Oleksandr Andrushchenko <andr2000@xxxxxxxxx>
> Cc: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; 'Juergen Gross'
> <jgross@xxxxxxxx>; xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> Subject: Re: [Xen-devel] Should PV frontend drivers trust the backends?
> 
> On Thu, Apr 26, 2018 at 11:47:41AM +0300, Oleksandr Andrushchenko wrote:
> > On 04/26/2018 11:16 AM, Paul Durrant wrote:
> > > > -----Original Message-----
> > > > From: Oleksandr Andrushchenko [mailto:andr2000@xxxxxxxxx]
> > > > Sent: 26 April 2018 07:00
> > > > To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; 'Juergen Gross'
> > > > <jgross@xxxxxxxx>; xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> > > > Subject: Re: [Xen-devel] Should PV frontend drivers trust the backends?
> > > >
> > > > On 04/25/2018 04:47 PM, Paul Durrant wrote:
> > > > > > -----Original Message-----
> > > > > > From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx]
> On
> > > > Behalf
> > > > > > Of Juergen Gross
> > > > > > Sent: 25 April 2018 13:43
> > > > > > To: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> > > > > > Subject: [Xen-devel] Should PV frontend drivers trust the backends?
> > > > > >
> > > > > > This is a followup of a discussion on IRC:
> > > > > >
> > > > > > The main question of the discussion was: "Should frontend drivers
> > > > > > trust their backends not doing malicious actions?"
> > > > > >
> > > > > > This IMO includes:
> > > > > >
> > > > > > 1. The data put by the backend on the ring page(s) is sane and
> > > > > >      consistent, meaning that e.g. the response producer index is
> > > > > >      always ahead of the consumer index.
> > > > > >
> > > > > > 2. Response data won't be modified by the backend after the
> > > > > >      producer index has been incremented signaling the response
> > > > > >      is valid.
> > > > > >
> > > > > > 3. Response data is sane, e.g. an I/O data length is not larger
> > > > > >      than the original buffer.
> > > > > >
> > > > > > 4. When a response has been sent, all grants belonging to the
> > > > > >      request have been unmapped again by the backend, meaning
> > > > > >      that the frontend can assume the grants can be removed
> > > > > >      without conflict.
> > > > > >
> > > > > > Today most frontend drivers (at least in the Linux kernel) seem
> > > > > > to assume all of the above is true (there are some exceptions,
> > > > > > but never for all items):
> > > > > >
> > > > > > - they don't check the sanity of ring index values
> > > > > > - they don't copy response data into local memory before looking
> > > > > >   at it
> > > > > > - they don't verify returned data length (or do so via BUG_ON())
> > > > > > - they BUG() in case of a conflict when trying to remove a grant
> > > > > >
> > > > > > So the basic question is: should all Linux frontend drivers be
> > > > > > modified in order to be able to tolerate buggy or malicious
> > > > > > backends? Or is the list of trust assumptions above fine?
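[For illustration, the first three of those checks can be sketched in plain C. Everything below (the `struct ring` layout, `RING_SIZE`, the function names) is hypothetical and merely stands in for the real Xen shared-ring macros:]

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define RING_SIZE 32          /* entries; must be a power of two */

struct response {
    uint32_t id;
    uint32_t len;             /* bytes the backend claims to have written */
};

struct ring {
    uint32_t rsp_prod;        /* advanced by the backend */
    uint32_t rsp_cons;        /* advanced by the frontend */
    struct response rsp[RING_SIZE];
};

/* Check 1: the producer index must never run ahead of the consumer
 * by more than the ring size (indices are free-running counters). */
static bool ring_indices_sane(const struct ring *r)
{
    return (uint32_t)(r->rsp_prod - r->rsp_cons) <= RING_SIZE;
}

/* Checks 2 and 3: copy the response into private memory first, then
 * validate the copy, so the backend cannot change it under our feet. */
static bool consume_response(struct ring *r, uint32_t buf_len,
                             struct response *out)
{
    if (!ring_indices_sane(r) || r->rsp_prod == r->rsp_cons)
        return false;
    memcpy(out, &r->rsp[r->rsp_cons % RING_SIZE], sizeof(*out));
    r->rsp_cons++;
    return out->len <= buf_len;   /* reject over-long I/O lengths */
}
```

[The key points are that the index arithmetic is done on free-running counters, and that the response is copied out of the shared page before being validated or used.]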
> > > > > >
> > > > > > IMO even in case the frontends do trust the backends to behave
> > > > > > sanely, this doesn't mean driver domains don't make sense.
> > > > > > Driver domains still make a Xen host more robust as they e.g.
> > > > > > protect the host against driver failures normally leading to a
> > > > > > crash of dom0.
> > > > > >
> > > > > I see the general question as being analogous to 'should a Linux
> > > > > device driver trust its hardware' and I think the answer for a
> > > > > general purpose OS like Linux is 'yes'.
> > > > > Now, having worked on fault tolerant systems in a past life, there
> > > > > are definitely cases where you want your OS not to implicitly
> > > > > trust its peripheral hardware and hence special device drivers are
> > > > > used.
> > > > So what do you do if the counters provided by the untrusted HW are
> > > > ok and the payload is not?
> > > Well, that depends on whether there is actually any way to verify the
> > > payload in a driver. Whatever layer in the system is responsible for
> > > the data needs to verify its integrity in a fault tolerant system.
> > > Generally the driver can only attempt to verify that its hardware is
> > > working as expected and quiesce it if not. For that reason, in the
> > > systems I worked on, the driver had the ability to control FETs that
> > > disconnected peripheral h/w from the PCI bus.
> > >
> > > > > I think the same would apply for virtual machines in situations
> > > > > where a driver domain is not wholly controlled by a host
> > > > > administrator or is not trusted to the same extent as dom0 for
> > > > > other reasons; i.e. they should have specialist frontends.
> > > > I believe we might be able to express some common strategy for the
> > > > frontends. I do understand that it all needs to be decided on a
> > > > case-by-case basis, but some things could still be common, e.g. what
> > > > a frontend should do if the prod/cons counters are out of sync:
> > > >    - keep trying to get back in sync - probably a bad idea, as the
> > > >      req/resp data may already be inconsistent (net can probably
> > > >      survive this, but not block)
> > > >    - tear down the connection with the backend - this may
> > > >      destabilize the whole system, e.g. imagine tearing down a "/"
> > > >      block device
> > > >    - BUG_ON() and die
> > > > To me the second option (tearing down the connection) seems the most
> > > > reasonable: it can still render the guest unusable, but at least it
> > > > gives the guest a chance to recover in a proper way
> > > >
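[The tear-down option could follow a sequence roughly like the sketch below; the state names and fields are hypothetical, not the real xenbus/driver structures:]

```c
/* Sketch of option two: on detecting a corrupt ring, tear the
 * connection down instead of BUG()ing, giving the guest a chance
 * to recover. All names here are made up for illustration. */
#include <stdbool.h>

enum conn_state { CONNECTED, CLOSING, CLOSED };

struct frontend {
    enum conn_state state;
    int inflight;             /* requests the backend still owes us */
    int failed;               /* requests completed with an error */
};

/* Called when a sanity check on the shared ring fails. */
static void teardown_connection(struct frontend *fe)
{
    if (fe->state != CONNECTED)
        return;               /* already tearing down: do nothing */
    fe->state = CLOSING;
    /* Fail every outstanding request up the stack with an I/O error
     * rather than waiting for responses that may never be sane. */
    fe->failed += fe->inflight;
    fe->inflight = 0;
    /* A real driver would now switch its xenbus state to Closed and
     * release (or deliberately leak, if still mapped) its grants. */
    fe->state = CLOSED;
}
```

[Failing the outstanding requests with an error gives the layers above a chance to react (retry, remount read-only, etc.) instead of hanging on responses that will never arrive.]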
> > > Absolutely that can be done and it's certainly a good idea to be
> > > somewhat defensive but, as you say, it's quite likely that the PV pair
> > > is part of a critical subsystem for the guest and so a BUG() may well
> > > be the best option to make sure that the inevitable guest crash
> > > actually contains pertinent information.
> 
> In some cases such a device might indeed be critical. But "quite likely"
> IMO isn't good enough to abandon all the other cases and crash the
> domain if any device fails.
> Tearing down a misbehaving connection is absolutely reasonable (I do not
> advocate for some complex recovery algorithm), but crashing the domain
> is not.

So what happens if the backend servicing the VM's boot disk fails? Is it better 
to:

a) BUG()/BSOD with some meaningful stack and code, such that it's obvious what 
happened, or
b) cover up and wait until something further up the storage stack crashes the 
VM, probably with some error that's just a generic timeout?

I'm clearly advocating a), but it's possible b) may be more desirable in some 
scenarios. I think the choice is up to whoever is writing the frontend, and 
no-one else should decide their policy for them.
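[One way to leave that choice with the frontend author is a per-device policy along these lines; the names below are made up for the sketch, and -5 merely stands in for -EIO:]

```c
#include <stdbool.h>

/* Hypothetical per-device policy, sketching the a)/b) choice above:
 * crash loudly for devices the guest cannot run without, tear down
 * quietly for everything else. */
enum fail_policy { FAIL_CRASH, FAIL_TEARDOWN };

static enum fail_policy choose_policy(bool critical_for_guest)
{
    return critical_for_guest ? FAIL_CRASH : FAIL_TEARDOWN;
}

/* Returns an error code for the tear-down case; a real frontend would
 * call BUG() (never returning) for FAIL_CRASH. Here the crash is only
 * recorded through the 'crashed' flag so the sketch stays runnable. */
static int handle_backend_fault(enum fail_policy p, int *crashed)
{
    if (p == FAIL_CRASH) {
        *crashed = 1;         /* stand-in for BUG()/BSOD */
        return 0;
    }
    return -5;                /* stand-in for -EIO: fail the device */
}
```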

> 
> > >
> > > > And, if my assumption is correct, we still do trust the contents of
> > > > the requests and responses, e.g. the payload is still trusted.
> > > Why should the payload be any more trusted than the content of the
> > > shared ring? They are both shared with the backend and therefore can
> > > be corrupted to the same extent.
> > This is exactly my point: if we only try to protect against inconsistent
> > prod/cons, the protection is still incomplete, as the payload may also
> > be the source of failure.
> 
> Well, you can take extra measures, external to the driver, to
> protect against malicious payload (like encryption mentioned by Andrew,
> or dm-verity for block devices). But you can't do the same about the
> driver itself (ring handling etc).
> 

As I said, verification should be down to the layer that has the relevant 
information.

> Of course the backend will be able to perform a DoS to some extent in all
> the cases, at least by stopping responding to requests. But keep in mind
> that the root fs is not the only device out there. There are also other
> block devices, network interfaces etc. And a misbehaving backend should
> _not_ be able to take over the frontend domain in those cases. And
> ideally it also shouldn't be able to crash it (if the device isn't
> critical for domU).
> 

I still think that is the choice of the frontend. Yes, they can be programmed 
defensively, but for some use cases it may just not be that important.

> If you want some real world use cases for this, here are two from Qubes
> OS:
> 
> 1. Block devices - base system devices (/, /home equivalent etc) have
> backends in dom0 (*), but there is also an option to use block devices
> exported by other domains, for example the one handling USB controllers.
> So, when you plug in a USB stick, one domain handles all the nasty USB
> stuff and exports the stick as a plain block device to another domain,
> where the user can mount a LUKS container stored there. Whatever happens
> there, nothing from that USB stick touches dom0 at any time.
> 
> 2. Network devices - there are no network backends in dom0 at all. There
> is one (or more) dedicated domain for handling NICs, then there is
> (possibly a tree of) domain(s) routing the traffic. In some cases a VM
> facing actual network (where the backend runs) is considered less
> trusted than a VM using that network (where the frontend runs).

But, without revocable grants that backend could still DoS the frontend, right?

> 
> BTW Since XSA-155 we do have some additional patches for the block and
> network frontends, making similar changes as were done to the backends
> at that time. I'll resend them in a moment.
> 
> (*) we still have plans to also support untrusted backends for the base
> system, with domU verifying all the data it gets (dm-verity, dm-crypt).
> But it isn't there yet.

Maybe the frontend should be advised of the trust level of a backend so that 
it can apply auditing should it wish to. If the backend were running in dom0 
then there would be little point, but a frontend may wish to be more careful 
when e.g. the backend domain is a trusted driver domain (but with no dm priv). 
There have also been discussions about skipping the use of grants when the 
backend has mapping privilege, for performance reasons, so maybe that could 
be worked in too.
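[One possible shape for such advice is a node written by the toolstack that the frontend reads at connect time. The sketch below simulates that decision; the "trusted" node name and its default are assumptions for illustration, not an existing interface:]

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for reading a hypothetical "backend/<dev>/trusted" node
 * from xenstore; NULL models the node being absent. */
static bool backend_trusted(const char *node_value)
{
    /* Absent node: assume trusted, for compatibility with existing
     * toolstacks that do not write it. */
    if (node_value == NULL)
        return true;
    return strcmp(node_value, "1") == 0;
}

/* The frontend enables its extra auditing only for untrusted backends,
 * keeping the fast path unchanged for dom0-hosted ones. */
static bool auditing_enabled(const char *node_value)
{
    return !backend_trusted(node_value);
}
```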

  Paul

> 
> --
> Best Regards,
> Marek Marczykowski-Górecki
> Invisible Things Lab
> A: Because it messes up the order in which people normally read text.
> Q: Why is top-posting such a bad thing?
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

