
Re: [Xen-devel] [PATCH 3/8] xen: defer call to xen_restrict until just before os_setup_post



> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxx] On Behalf Of
> Andrew Cooper
> Sent: 13 October 2017 10:00
> To: Ross Lagerwall <ross.lagerwall@xxxxxxxxxx>; Ian Jackson
> <Ian.Jackson@xxxxxxxxxx>; qemu-devel@xxxxxxxxxx
> Cc: Anthony Perard <anthony.perard@xxxxxxxxxx>; Juergen Gross
> <jgross@xxxxxxxx>; Stefano Stabellini <sstabellini@xxxxxxxxxx>; xen-
> devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH 3/8] xen: defer call to xen_restrict until
> just before os_setup_post
> 
> On 13/10/2017 09:37, Ross Lagerwall wrote:
> > On 10/09/2017 05:01 PM, Ian Jackson wrote:
> >> We need to restrict *all* the control fds that qemu opens.  Looking in
> >> /proc/PID/fd shows there are many; their allocation seems scattered
> >> throughout Xen support code in qemu.
> >>
> >> We must postpone the restrict call until roughly the same time as qemu
> >> changes its uid, chroots (if applicable), and so on.
> >>
> >> There doesn't seem to be an appropriate hook already.  The RunState
> >> change hook fires at different times depending on exactly what mode
> >> qemu is operating in.
> >>
> >> And it appears that no-one but the Xen code wants a hook at this phase
> >> of execution.  So, introduce a bare call to a new function
> >> xen_setup_post, just before os_setup_post.  Also provide the
> >> appropriate stub for when Xen compilation is disabled.
> >>
> >> We do the restriction before rather than after os_setup_post, because
> >> xen_restrict may need to open /dev/null, and os_setup_post might have
> >> called chroot.
> >>
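For what it's worth, here is a minimal sketch of the arrangement described
above. The stub shape, the use of xen_domid, and the exact call site are my
assumptions rather than lines from the patch:

  /* Xen stub, used when CONFIG_XEN is compiled out: */
  static inline void xen_setup_post(void) { }

  /* Xen-enabled implementation (somewhere in the Xen support code): */
  #include <stdio.h>
  #include <stdlib.h>
  #include <xentoolcore.h>        /* xentoolcore_restrict_all() */

  void xen_setup_post(void)
  {
      /* Restrict every Xen control fd registered with xentoolcore so far. */
      int rc = xentoolcore_restrict_all(xen_domid);
      if (rc < 0) {
          perror("xentoolcore_restrict_all");
          exit(1);
      }
  }

  /* vl.c, late in main(): restrict before os_setup_post(), because
   * restricting may need to open /dev/null and os_setup_post() may
   * chroot(). */
  xen_setup_post();
  os_setup_post();
  main_loop();
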
> > This works for normally starting a VM but doesn't seem to work when
> > resuming/migrating.
> >
> > Here is the order of selected operations when starting a VM normally:
> >     VNC server running on 127.0.0.1:5901
> >     xen_change_state_handler()
> >     xenstore_record_dm_state: running
> >     xen_setup_post()
> >     xentoolcore_restrict_all: rc = 0
> >     os_setup_post()
> >     main_loop()
> >
> > Here is the order of selected operations when starting QEMU with
> > -incoming fd:... :
> >     VNC server running on 127.0.0.1:5902
> >     migration_fd_incoming()
> >     xen_setup_post()
> >     xentoolcore_restrict_all: rc = 0
> >     os_setup_post()
> >     main_loop()
> >     migration_set_incoming_channel()
> >     migrate_set_state()
> >     xen_change_state_handler()
> >     xenstore_record_dm_state: running
> >     error recording dm state
> >     qemu exited with code 1
> >
> > The issue is that QEMU needs xenstore access to write "running", but by
> > that point it has already been restricted. Moving xen_setup_post() into
> > xen_change_state_handler() works fine. The remaining problem is that in
> > the migration case it executes after os_setup_post(), so QEMU might
> > already be chrooted, in which case opening /dev/null to restrict the fds
> > doesn't work (unless the new root has a /dev/null).
> >
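A rough sketch of the move Ross describes, based on the existing handler in
hw/xen/xen-common.c; where exactly xen_setup_post() ends up is my assumption
of what "moving it into the handler" looks like, not a tested patch:

  static void xen_change_state_handler(void *opaque, int running,
                                       RunState state)
  {
      if (running) {
          /* record state running */
          xenstore_record_dm_state(xenstore, "running");

          /* Only restrict once the xenstore write above has been done,
           * since it needs an unrestricted handle. On the -incoming
           * path this runs after os_setup_post(), so any chroot must
           * contain a /dev/null for the restriction to succeed. */
          xen_setup_post();
      }
  }
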
> 
> Wasn't the agreement in the end to remove all use of xenstore from the
> device model?  This running notification can and should be done via QMP,
> at which point we break a causal dependency.
> 

Yes, that was the agreement. One problem is that there is not yet adequate
separation between QEMU's common and pv/hvm init routines to do this.
Nor, I believe, is there support in libxl to spawn separate xenpv and xenfv
instances of QEMU for the same guest.

  Paul

> For safety reasons, qemu needs to have restricted/dropped/etc all
> permissions before it looks at a single byte of incoming migration data,
> to protect against buggy or malicious alterations to the migration stream.
> 
> ~Andrew
> 