Re: [PATCH v3 1/4] xen/domctl, tools: Introduce a new domctl to get guest memory map
On 03.04.2024 10:16, Henry Wang wrote:
> --- a/tools/libs/ctrl/xc_domain.c
> +++ b/tools/libs/ctrl/xc_domain.c
> @@ -697,6 +697,39 @@ int xc_domain_setmaxmem(xc_interface *xch,
>      return do_domctl(xch, &domctl);
>  }
>
> +int xc_get_domain_mem_map(xc_interface *xch, uint32_t domid,
> +                          struct xen_mem_region mem_regions[],
> +                          uint32_t *nr_regions)
> +{
> +    int rc;
> +    struct xen_domctl domctl = {
> +        .cmd = XEN_DOMCTL_get_mem_map,
> +        .domain = domid,
> +        .u.mem_map = {
> +            .nr_mem_regions = *nr_regions,
> +            .pad = 0,
This isn't needed: since the struct has an initializer, all unmentioned
fields will be set to 0 anyway.
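A minimal standalone illustration of the C rule referenced here (this
example is not part of the patch or the original mail):

    #include <assert.h>
    #include <stdint.h>

    struct example {
        uint32_t nr_mem_regions;
        uint32_t pad;
    };

    int main(void)
    {
        /*
         * With a designated initializer, any members not explicitly
         * named are zero-initialized, so an explicit ".pad = 0" is
         * redundant.
         */
        struct example e = { .nr_mem_regions = 4 };

        assert(e.pad == 0);
        return 0;
    }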
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -696,6 +696,7 @@ int arch_domain_create(struct domain *d,
>  {
>      unsigned int count = 0;
>      int rc;
> +    struct mem_map_domain *mem_map = &d->arch.mem_map;
>
>      BUILD_BUG_ON(GUEST_MAX_VCPUS < MAX_VIRT_CPUS);
>
> @@ -785,6 +786,20 @@ int arch_domain_create(struct domain *d,
>      d->arch.sve_vl = config->arch.sve_vl;
>  #endif
>
> +    if ( mem_map->nr_mem_regions < XEN_MAX_MEM_REGIONS )
> +    {
> +        mem_map->regions[mem_map->nr_mem_regions].start = GUEST_MAGIC_BASE;
> +        mem_map->regions[mem_map->nr_mem_regions].size = GUEST_MAGIC_SIZE;
> +        mem_map->regions[mem_map->nr_mem_regions].type =
> +            GUEST_MEM_REGION_MAGIC;
> +        mem_map->nr_mem_regions++;
> +    }
> +    else
> +    {
> +        printk("Exceed max number of supported memory map regions\n");
Debugging leftover?
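If a message is to remain on this path at all, a more conventional form
would carry a log level and identify the domain, e.g. (a sketch only,
not part of the patch):

    printk(XENLOG_G_WARNING
           "%pd: exceeded maximum number of supported memory map regions\n",
           d);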
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -148,7 +148,6 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>
>          return 0;
>      }
> -
>      case XEN_DOMCTL_vuart_op:
>      {
>          int rc;
Why? Instead you want ...
> @@ -176,6 +175,37 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>
>          return rc;
>      }
> +    case XEN_DOMCTL_get_mem_map:
... separating blank line above this line and ...
> +    {
> +        int rc = 0;
> +        uint32_t nr_regions, i;
> +
> +        if ( domctl->u.mem_map.pad )
> +            return -EINVAL;
> +
> +        /*
> +         * Cap the number of regions to the minimum value between toolstack
> +         * and hypervisor to avoid overflowing the buffer.
> +         */
> +        nr_regions = min(d->arch.mem_map.nr_mem_regions,
> +                         domctl->u.mem_map.nr_mem_regions);
> +
> +        domctl->u.mem_map.nr_mem_regions = nr_regions;
> +
> +        for ( i = 0; i < nr_regions; i++ )
> +        {
> +            if ( d->arch.mem_map.regions[i].pad )
> +                return -EINVAL;
> +        }
> +
> +        if ( copy_to_guest(domctl->u.mem_map.buffer,
> +                           d->arch.mem_map.regions,
> +                           nr_regions) ||
> +             __copy_to_guest(u_domctl, domctl, 1) )
> +            rc = -EFAULT;
> +
> +        return rc;
> +    }
>      default:
... this one.
Further, with the way you use min() above, how is the caller going to know
whether it simply specified too small an array?
And then you check d->arch.mem_map.regions[i].pad. Why's that? And even
if needed here for some reason, that's surely not EINVAL, but an internal
error in Xen.
Finally instead of __copy_to_guest() can't you use __copy_field_to_guest(),
for just nr_regions?
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -946,6 +946,31 @@ struct xen_domctl_paging_mempool {
>      uint64_aligned_t size; /* Size in bytes. */
>  };
>
> +#ifndef XEN_MAX_MEM_REGIONS
> +#define XEN_MAX_MEM_REGIONS 1
> +#endif
> +
> +struct xen_mem_region {
> +    uint64_aligned_t start;
> +    uint64_aligned_t size;
> +    uint32_t type;
What is this field set to? I see no #define(s) in this header. If it's
the GUEST_MEM_REGION_* in the Arm header, a connection needs to be made.
Also note that GUEST_MEM_REGION_* violate name space requirements: New
additions should have XEN_ / xen_ prefixes on their names.
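Purely to illustrate the naming point, the public header would then carry
something like (names and values hypothetical):

    /* Region types, in the public namespace. */
    #define XEN_MEM_REGION_DEFAULT  0
    #define XEN_MEM_REGION_MAGIC    1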
> +    /* Must be zero */
> +    uint32_t pad;
This, being OUT only, should not be required to be set by the caller. As
long as no use appears, Xen merely ought to guarantee that it'll be 0 upon
return.
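A sketch of what that guarantee could look like on the copy-out path,
sanitizing a local copy of each element rather than demanding anything of
the caller (illustrative only):

    for ( i = 0; i < nr_regions; i++ )
    {
        struct xen_mem_region region = d->arch.mem_map.regions[i];

        region.pad = 0; /* OUT only: guarantee it reads back as 0. */

        if ( copy_to_guest_offset(domctl->u.mem_map.buffer, i,
                                  &region, 1) )
            return -EFAULT;
    }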
Jan