[Xen-devel] [PATCH v2 2/2] libxl: fix build (missing CLONE_NEWIPC) on astonishingly old systems
CLONE_NEWIPC was introduced in Linux 2.6.19, on the 29th of November
2006, which was 12 years, 1 month, and 14 days ago.  Nevertheless,
apparently some people are trying to build Xen on systems whose kernel
headers are that old.

Placate these people by providing a fallback #define for CLONE_NEWIPC.

The actual binary value will of course remain constant, because of the
kernel ABI promise, so this is and will remain correct on all platforms
where CLONE_NEWIPC is supported.  (Even if for some reason we miss the
right #includes.)

Of course at runtime this value will not work on older kernels: it
will be rejected as unknown.  However, on those kernels we do not want
to support dm_restrict, and an attempt to use it will fail.  It is OK
for the failure to be a messy EINVAL syscall failure.

(The IPC namespace unshare is necessary to prevent a suborned,
deprivileged qemu from causing trouble with shm, sem, etc.)

CC: Wei Liu <wei.liu2@xxxxxxxxxx>
CC: Juergen Gross <jgross@xxxxxxxx>
CC: Jan Beulich <JBeulich@xxxxxxxx>
Signed-off-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
---
v2: Get rid of spurious X
---
 tools/libxl/libxl_linux.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/tools/libxl/libxl_linux.c b/tools/libxl/libxl_linux.c
index 6475cca64b..59dd945bc1 100644
--- a/tools/libxl/libxl_linux.c
+++ b/tools/libxl/libxl_linux.c
@@ -18,6 +18,14 @@
 #include <sys/resource.h>
 
 #include "libxl_internal.h"
+
+/* Workarounds for Linux-specific lacks can go here: */
+
+#ifndef CLONE_NEWIPC /* Available as of Linux 2.6.19 / glibc 2.8 */
+# define CLONE_NEWIPC 0x08000000
+#endif
+
+
 int libxl__try_phy_backend(mode_t st_mode)
 {
     if (S_ISBLK(st_mode) || S_ISREG(st_mode)) {
-- 
2.11.0
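
[Editor's note: the following is a minimal standalone C sketch, not
part of the patch or of libxl itself, illustrating the pattern the
commit message describes: a fallback #define for old headers, plus an
unshare() call that fails cleanly (typically EINVAL) on kernels that
predate IPC namespaces.  The program name and messages are
illustrative only.]

/* sketch: fallback CLONE_NEWIPC define + graceful unshare() failure */
#define _GNU_SOURCE
#include <errno.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

#ifndef CLONE_NEWIPC /* kernel ABI value, stable since Linux 2.6.19 */
# define CLONE_NEWIPC 0x08000000
#endif

int main(void)
{
    if (unshare(CLONE_NEWIPC)) {
        /* On pre-2.6.19 kernels the flag is unknown and the call
         * fails with EINVAL; the feature relying on it (here, the
         * qemu depriv / dm_restrict case) is simply unavailable. */
        fprintf(stderr, "unshare(CLONE_NEWIPC) failed: %s\n",
                strerror(errno));
        return 1;
    }
    puts("now in a fresh IPC namespace");
    return 0;
}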