
Re: Layer 3 (point-to-point) netfront and netback drivers



On Mon, Sep 19, 2022 at 04:21:27PM -0700, Elliott Mitchell wrote:
> On Mon, Sep 19, 2022 at 05:41:05PM -0400, Demi Marie Obenour wrote:
> > On Mon, Sep 19, 2022 at 01:46:59PM -0700, Elliott Mitchell wrote:
> > > On Sun, Sep 18, 2022 at 08:41:25AM -0400, Demi Marie Obenour wrote:
> > > > How difficult would it be to provide layer 3 (point-to-point) versions
> > > > of the existing netfront and netback drivers?  Ideally, these would
> > > > share almost all of the code with the existing drivers, with the only
> > > > difference being how they are registered with the kernel.  Advantages
> > > > compared to the existing drivers include less attack surface (since the
> > > > peer is no longer network-adjacent), slightly better performance, and no
> > > > need for ARP or NDP traffic.
> > > 
> > > I've actually been wondering about a similar idea.  How about breaking
> > > the entire network stack off and placing /that/ in a separate VM?
> > 
> > This is going to be very hard to do without extensive and difficult
> > changes to applications.  Switching to layer 3 links is a much smaller
> > change that should be transparent to applications.
> 
> Indeed for ones which modify network settings, but not for ones which
> merely use the sockets API.  Isn't this the same issue for what you're
> suggesting?

No.  What I am referring to is having netfront and netback carry IP
packets instead of Ethernet frames.  This is transparent to applications
that use the sockets API.  What you are talking about, if I understand
correctly, requires changing the implementation of the sockets API,
which is much harder.

> > > The other use is network cards which are increasingly able to handle more
> > > of the network stack.  The Linux network team have been resistant to
> > > allowing more offloading, so perhaps it is time to break *everything*
> > > off.
> > 
> > Do you have any particular examples?  The only one I can think of is
> > that Linux is not okay with TCP offload engines.
> 
> That is precisely what I was thinking of.  While I understand the desire
> for control, when it comes down to it a network card which lies could
> simply transparently proxy everything.  Anything not protected by
> cryptography is vulnerable, so worrying about raw packets doesn't seem
> useful.

IIRC the problems with TCP offload engines are that they do not support
all of Linux’s features (such as netfilter), require invasive hooks so
that configuration can still be managed with standard Linux tools, and
have closed-source firmware with substantial remote attack surface.

> > > I'm unsure the benefits would justify the effort, but I keep thinking of
> > > this as the solution to some interesting issues.  Filtering becomes more
> > > interesting, but BPF could work across VMs.
> > 
> > Classic BPF perhaps, but eBPF's attack surface is far too large for this
> > to be viable.  Unprivileged eBPF is already disabled by default.
> 
> I was thinking of classic BPF.  If everything below the sockets layer
> was in a separate VM, filtering rules could still work by pushing BPF
> rules to the other side.
> 
> 
> Your idea is to push less into a separate VM than I was thinking.  I
> wanted to bring up that it might be worthwhile to push more.  If your
> project launches, I imagine you'll eventually be trying to encompass
> more, so it may be easier to consider now what the future will hold.

I don’t actually plan to go beyond this, although you are of course free
to do so.  This change is simply to reduce attack surface and complexity
in Qubes OS, which uses layer 2 links where layer 3 links would do.  I
am hoping this is just a matter of how the netback and netfront drivers
register with Linux.  I also don’t have the time to implement the change
right now.  My question is about what the change would involve.
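For what it's worth, the registration difference being asked about might
look roughly like the sketch below.  This is a hedged, non-compilable
illustration, not code from the actual drivers: `xennet_l3_setup`,
`xennet_netdev_ops`, and `l3_type_trans` are placeholder names, and the
pattern follows how existing layer 3 drivers such as tun register with
the kernel.

```c
/* Hypothetical sketch: where the existing netfront does Ethernet-style
 * setup, a layer 3 point-to-point variant would register an ARP-less,
 * header-less device instead. */
static void xennet_l3_setup(struct net_device *dev)
{
	dev->type            = ARPHRD_NONE;       /* no link-layer address */
	dev->hard_header_len = 0;                 /* frames start at the IP header */
	dev->addr_len        = 0;
	dev->mtu             = ETH_DATA_LEN;
	dev->flags           = IFF_POINTOPOINT | IFF_NOARP;  /* no ARP or NDP */
	dev->netdev_ops      = &xennet_netdev_ops;  /* data path unchanged */
}

/* On receive, instead of eth_type_trans(), the protocol would be taken
 * from the IP version nibble of the payload: */
static __be16 l3_type_trans(struct sk_buff *skb)
{
	switch (skb->data[0] >> 4) {
	case 4:  return htons(ETH_P_IP);
	case 6:  return htons(ETH_P_IPV6);
	default: return 0;  /* unrecognized payload; drop */
	}
}
```

If the shared ring protocol already carries opaque frames, most of the
change may indeed reduce to this registration and receive-path tweak,
but that would need to be confirmed against the actual drivers.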
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

Attachment: signature.asc
Description: PGP signature
