[PATCH 1/6] tools/pygrub: Set mount propagation to private recursively
This is required for every mount done inside a mount namespace to go away
after the namespace itself goes away. The comment referring to
unreliability in Linux 4.19 was just wrong.

This patch sets the story straight and makes the depriv pygrub a bit more
confined should a layer of the onion be vulnerable.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@xxxxxxxxx>
---
 tools/pygrub/src/pygrub | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/tools/pygrub/src/pygrub b/tools/pygrub/src/pygrub
index 541e562327..08540ad288 100755
--- a/tools/pygrub/src/pygrub
+++ b/tools/pygrub/src/pygrub
@@ -55,6 +55,12 @@ def unshare(flags):
     if unshare(flags) < 0:
         raise OSError(ctypes.get_errno(), os.strerror(ctypes.get_errno()))
 
+    # It's very typical for systemd to mount / with MS_SHARED. That means
+    # any events in the new namespace get propagated back to the parent.
+    #
+    # Undo it so that every mount done in the NS stays confined within it.
+    subprocess.check_output(["mount", "--make-rprivate", "/"])
+
 def bind_mount(src, dst, options):
     open(dst, "a").close() # touch
 
@@ -113,11 +119,9 @@ def depriv(output_directory, output, device, uid, path_kernel, path_ramdisk):
         if rc != 0 or os.path.getsize(path) == 0:
             os.unlink(path)
 
-    # Normally, unshare(CLONE_NEWNS) will ensure this is not required.
-    # However, this syscall doesn't exist in *BSD systems and doesn't
-    # auto-unmount everything on older Linux kernels (At least as of
-    # Linux 4.19, but it seems fixed in 5.15). Either way,
-    # recursively unmount everything if needed. Quietly.
+    # Unshare(CLONE_NEWNS) ensures this is not required, but that's not
+    # present on *BSD, so recursively unmount everything if needed.
+    # Quietly.
     with open('/dev/null', 'w') as devnull:
         subprocess.call(["umount", "-f", chroot + device_path],
                         stdout=devnull, stderr=devnull)
-- 
2.34.1
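[Editor's note, not part of the patch: the same propagation change can also
be made through the mount(2) syscall rather than by spawning the mount(8)
binary, mirroring how pygrub already wraps unshare(2) with ctypes. The
sketch below is a minimal illustration under that assumption; the
make_rprivate helper name is hypothetical, and the MS_REC/MS_PRIVATE
values are the standard ones from <sys/mount.h> on Linux with glibc.]

    import ctypes
    import ctypes.util
    import os

    MS_REC     = 0x4000   # apply the propagation change to the whole subtree
    MS_PRIVATE = 0x40000  # stop mount/umount events from propagating out

    def make_rprivate(target=b"/"):
        # Hypothetical helper, equivalent to `mount --make-rprivate /`.
        libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
        # mount(source, target, fstype, flags, data): for a propagation-only
        # change the kernel looks only at target and flags, so the other
        # arguments can be NULL.
        if libc.mount(None, target, None, MS_REC | MS_PRIVATE, None) < 0:
            raise OSError(ctypes.get_errno(), os.strerror(ctypes.get_errno()))

[On a systemd host, running `findmnt -o TARGET,PROPAGATION /` inside the
new namespace should report "shared" before the change and "private"
afterwards, which is a quick way to verify either variant.]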