
Re: [Xen-devel] [PATCH] libxc: lzma build fix



Ian Jackson writes ("Re: [Xen-devel] [PATCH] libxc: lzma build fix"):
> Yes, we could revert this patch and hardcode a value.  32M seems
> plausible.

How about this.

Ian.


libxc: Do not use host physmem as parameter to lzma decoder

It's not clear why a userspace lzma decoder would want to use that
particular value, what bearing it has on anything, or why it should
assume it may use 1/3 of the total RAM in the system (potentially
quite a large amount of RAM) rather than any other limit.

Instead, hardcode 32MiB.

This reverts 22830:c80960244942, removes the xc_get_physmem/physmem
function entirely, and replaces the expression at the call site with a
fixed constant.

Signed-off-by: Ian Jackson <ian.jackson@xxxxxxxxxxxxx>
Cc: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
Cc: Christoph Egger <Christoph.Egger@xxxxxxx>
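
For reference, here is a minimal standalone sketch of how the memlimit
argument to lzma_alone_decoder() behaves with a fixed 32MiB cap.  It is
illustrative only and not part of the patch: the function name and the
single-shot buffer handling are made up for the example, whereas the
real loader in xc_dom_bzimageloader.c grows its output buffer
incrementally.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <lzma.h>

/* Decode a .lzma (LZMA_Alone) buffer, refusing to let the decoder
 * allocate more than 32MiB.  liblzma returns LZMA_MEMLIMIT_ERROR
 * rather than allocating beyond the cap. */
static int decode_lzma_fixed_limit(const uint8_t *in, size_t in_len,
                                   uint8_t *out, size_t out_len)
{
    lzma_stream stream = LZMA_STREAM_INIT;
    lzma_ret ret;

    /* Hard 32MiB cap instead of a fraction of host RAM. */
    ret = lzma_alone_decoder(&stream, 32*1024*1024);
    if ( ret != LZMA_OK )
        return -1;

    stream.next_in   = in;
    stream.avail_in  = in_len;
    stream.next_out  = out;
    stream.avail_out = out_len;

    ret = lzma_code(&stream, LZMA_FINISH);
    lzma_end(&stream);

    if ( ret == LZMA_MEMLIMIT_ERROR )
    {
        /* The stream's dictionary would need more than 32MiB. */
        fprintf(stderr, "LZMA: memory limit exceeded\n");
        return -1;
    }

    return ( ret == LZMA_STREAM_END ) ? 0 : -1;
}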

diff -r 88cf07fed7d2 tools/libxc/xc_dom_bzimageloader.c
--- a/tools/libxc/xc_dom_bzimageloader.c        Fri Jan 28 18:39:09 2011 +0000
+++ b/tools/libxc/xc_dom_bzimageloader.c        Fri Jan 28 18:49:08 2011 +0000
@@ -152,7 +152,7 @@ static int xc_try_lzma_decode(
     int outsize;
     const char *msg;
 
-    ret = lzma_alone_decoder(&stream, xc_get_physmem() / 3);
+    ret = lzma_alone_decoder(&stream, 32*1024*1024);
     if ( ret != LZMA_OK )
     {
         DOMPRINTF("LZMA: Failed to init stream decoder");
diff -r 88cf07fed7d2 tools/libxc/xc_linux.c
--- a/tools/libxc/xc_linux.c    Fri Jan 28 18:39:09 2011 +0000
+++ b/tools/libxc/xc_linux.c    Fri Jan 28 18:49:08 2011 +0000
@@ -55,27 +55,6 @@ void discard_file_cache(xc_interface *xc
     errno = saved_errno;
 }
 
-uint64_t xc_get_physmem(void)
-{
-    uint64_t ret = 0;
-    const long pagesize = sysconf(_SC_PAGESIZE);
-    const long pages = sysconf(_SC_PHYS_PAGES);
-
-    if ( (pagesize != -1) || (pages != -1) )
-    {
-        /*
-         * According to docs, pagesize * pages can overflow.
-         * Simple case is 32-bit box with 4 GiB or more RAM,
-         * which may report exactly 4 GiB of RAM, and "long"
-         * being 32-bit will overflow. Casting to uint64_t
-         * hopefully avoids overflows in the near future.
-         */
-        ret = (uint64_t)(pagesize) * (uint64_t)(pages);
-    }
-
-    return ret;
-}
-
 /*
  * Local variables:
  * mode: C
diff -r 88cf07fed7d2 tools/libxc/xc_netbsd.c
--- a/tools/libxc/xc_netbsd.c   Fri Jan 28 18:39:09 2011 +0000
+++ b/tools/libxc/xc_netbsd.c   Fri Jan 28 18:49:08 2011 +0000
@@ -23,9 +23,6 @@
 #include <xen/sys/evtchn.h>
 #include <unistd.h>
 #include <fcntl.h>
-#include <stdio.h>
-#include <errno.h>
-#include <sys/sysctl.h>
 
 static xc_osdep_handle netbsd_privcmd_open(xc_interface *xch)
 {
@@ -354,24 +351,6 @@ void discard_file_cache(xc_interface *xc
     errno = saved_errno;
 }
 
-uint64_t xc_get_physmem(void)
-{
-    int mib[2], rc;
-    size_t len;
-    uint64_t physmem;
-
-    mib[0] = CTL_HW;
-    mib[1] = HW_PHYSMEM64;
-    rc = sysctl(mib, 2, &physmem, &len, NULL, 0);
-
-    if (rc == -1) {
-        /* PERROR("%s: Failed to get hw.physmem64: %s\n", strerror(errno)); */
-        return 0;
-    }
-
-    return physmem;
-}
-
 static struct xc_osdep_ops *netbsd_osdep_init(xc_interface *xch, enum xc_osdep_type type)
 {
     switch ( type )
diff -r 88cf07fed7d2 tools/libxc/xc_private.h
--- a/tools/libxc/xc_private.h  Fri Jan 28 18:39:09 2011 +0000
+++ b/tools/libxc/xc_private.h  Fri Jan 28 18:49:08 2011 +0000
@@ -275,9 +275,6 @@ void bitmap_byte_to_64(uint64_t *lp, con
 /* Optionally flush file to disk and discard page cache */
 void discard_file_cache(xc_interface *xch, int fd, int flush);
 
-/* How much physical RAM is available? */
-uint64_t xc_get_physmem(void);
-
 #define MAX_MMU_UPDATES 1024
 struct xc_mmu {
     mmu_update_t updates[MAX_MMU_UPDATES];
