
Re: [Xen-devel] [PATCH for-xen-4.5] libxl: Allow copying smaller bitmap into a larger one



On 11/21/2014 06:12 AM, Wei Liu wrote:
On Thu, Nov 20, 2014 at 04:27:34PM -0500, Boris Ostrovsky wrote:
When parsing bitmap objects, the JSON parser will create a libxl_bitmap
of the smallest size needed.

This can cause problems when the saved image file specifies CPU affinity.
For example, if 'vcpu_hard_affinity' in the saved image has only the
first CPU specified, just a single byte will be allocated and
libxl_bitmap->size will be set to 1.

This will result in an assertion failure in
libxl_set_vcpuaffinity()->libxl_bitmap_copy(), since the destination
bitmap is created for the maximum number of CPUs.

We could allocate that bitmap with the same size as the source; however,
it is later passed to xc_vcpu_setaffinity(), which expects it to be
sized for the maximum number of CPUs.

Instead, we should allow copying the (smaller) bitmap read by the parser
and keep the rest of the bytes in the destination map unmodified (zero
in this case).
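
(For illustration, here is a minimal standalone sketch of the failing
pattern, using simplified stand-ins for the libxl types. It assumes the
strict copy asserts that both maps are the same size, which is what the
failure described above implies; the names here are illustrative, not
libxl's.)

    #include <assert.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Simplified stand-in for libxl_bitmap; 'size' is in bytes. */
    typedef struct { uint32_t size; uint8_t *map; } bitmap;

    /* Strict copy: both bitmaps must be the same size. */
    static void bitmap_copy_strict(bitmap *d, const bitmap *s)
    {
        assert(d->size == s->size);
        memcpy(d->map, s->map, s->size);
    }

    int main(void)
    {
        /* Parser output: only CPU 0 set, so one byte is allocated. */
        bitmap parsed = { .size = 1, .map = calloc(1, 1) };
        parsed.map[0] = 0x01;

        /* Destination sized for the host maximum, e.g. 64 CPUs. */
        bitmap host = { .size = 8, .map = calloc(8, 1) };

        bitmap_copy_strict(&host, &parsed);  /* asserts: 8 != 1 */
        return 0;
    }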

What about copying a large bitmap into a smaller one? Say, you migrate
to a host whose number of CPUs is smaller than the size of the source
bitmap. Is this a valid use case?

Should we have a "best effort" copy function? That is,

   size = min(source_size, destination_size)
   copy(source, destination, size)

Right, I hadn't thought about the reversed case.
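
A best-effort copy along those lines might look like this (an
illustrative sketch only, reusing the simplified 'bitmap' stand-in from
the sketch above; the name is hypothetical, not part of libxl):

    /* Copy min(source, destination) bytes; any remaining destination
     * bytes are left untouched. */
    static void bitmap_copy_best_effort(bitmap *d, const bitmap *s)
    {
        uint32_t sz = d->size < s->size ? d->size : s->size;

        memcpy(d->map, s->map, sz);
    }

Whether the destination's trailing bytes should be cleared instead of
left as-is is a policy choice; the patch below leaves them unmodified
on the assumption that the destination was freshly allocated (and hence
zeroed).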

-boris



In any case, I think this is a regression and should be fixed for 4.5.

Wei.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
---
  tools/libxl/libxl.c       |    4 ++--
  tools/libxl/libxl_utils.c |    7 +++++++
  tools/libxl/libxl_utils.h |    2 ++
  3 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index f7961f6..84fd2ca 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -5319,7 +5319,7 @@ int libxl_set_vcpuaffinity(libxl_ctx *ctx, uint32_t domid, uint32_t vcpuid,
          if (rc)
              goto out;
-        libxl_bitmap_copy(ctx, &hard, cpumap_hard);
+        libxl_bitmap_copy_partial(ctx, &hard, cpumap_hard);
          flags = XEN_VCPUAFFINITY_HARD;
      }
      if (cpumap_soft) {
@@ -5327,7 +5327,7 @@ int libxl_set_vcpuaffinity(libxl_ctx *ctx, uint32_t domid, uint32_t vcpuid,
          if (rc)
              goto out;
-        libxl_bitmap_copy(ctx, &soft, cpumap_soft);
+        libxl_bitmap_copy_partial(ctx, &soft, cpumap_soft);
          flags |= XEN_VCPUAFFINITY_SOFT;
      }
diff --git a/tools/libxl/libxl_utils.c b/tools/libxl/libxl_utils.c
index 58df4f3..2a08bef 100644
--- a/tools/libxl/libxl_utils.c
+++ b/tools/libxl/libxl_utils.c
@@ -614,6 +614,13 @@ void libxl_bitmap_copy(libxl_ctx *ctx, libxl_bitmap *dptr,
      memcpy(dptr->map, sptr->map, sz * sizeof(*dptr->map));
  }
+void libxl_bitmap_copy_partial(libxl_ctx *ctx, libxl_bitmap *dptr,
+                               const libxl_bitmap *sptr)
+{
+    assert(dptr->size >= sptr->size);
+    memcpy(dptr->map, sptr->map, sptr->size * sizeof(*dptr->map));
+}
+
  void libxl_bitmap_copy_alloc(libxl_ctx *ctx,
                               libxl_bitmap *dptr,
                               const libxl_bitmap *sptr)
diff --git a/tools/libxl/libxl_utils.h b/tools/libxl/libxl_utils.h
index 117b229..d4d0a51 100644
--- a/tools/libxl/libxl_utils.h
+++ b/tools/libxl/libxl_utils.h
@@ -80,6 +80,8 @@ void libxl_bitmap_copy_alloc(libxl_ctx *ctx, libxl_bitmap *dptr,
                               const libxl_bitmap *sptr);
  void libxl_bitmap_copy(libxl_ctx *ctx, libxl_bitmap *dptr,
                         const libxl_bitmap *sptr);
+void libxl_bitmap_copy_partial(libxl_ctx *ctx, libxl_bitmap *dptr,
+                               const libxl_bitmap *sptr);
  int libxl_bitmap_is_full(const libxl_bitmap *bitmap);
  int libxl_bitmap_is_empty(const libxl_bitmap *bitmap);
  int libxl_bitmap_test(const libxl_bitmap *bitmap, int bit);
--
1.7.1

