
Re: [Xen-devel] [PATCH v7 for-next 04/12] x86/mmcfg: add handlers for the PVH Dom0 MMCFG areas



>>> On 18.10.17 at 13:40, <roger.pau@xxxxxxxxxx> wrote:
> +int __hwdom_init register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
> +                                             unsigned int start_bus,
> +                                             unsigned int end_bus,
> +                                             unsigned int seg)
> +{
> +    struct hvm_mmcfg *mmcfg, *new = xmalloc(struct hvm_mmcfg);
> +
> +    ASSERT(is_hardware_domain(d));
> +
> +    if ( !new )
> +        return -ENOMEM;
> +
> +    new->addr = addr + (start_bus << 20);
> +    new->start_bus = start_bus;
> +    new->segment = seg;
> +    new->size = (end_bus - start_bus + 1) << 20;

Please check end_bus >= start_bus early on in the function.
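Something along these lines would do (sketch only; -EINVAL is my guess at the
error code, and the xfree() accounts for the allocation already done in the
declaration above):

    if ( end_bus < start_bus )
    {
        xfree(new);
        return -EINVAL;
    }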

> +void destroy_vpci_mmcfg(struct list_head *domain_mmcfg)
> +{
> +    while ( !list_empty(domain_mmcfg) )
> +    {
> +        struct hvm_mmcfg *mmcfg = list_first_entry(domain_mmcfg,
> +                                                   struct hvm_mmcfg, next);
> +
> +        list_del(&mmcfg->next);
> +        xfree(mmcfg);
> +    }

For sanity reasons, wouldn't it be better to write-lock
d->arch.hvm_domain.mmcfg_lock around the loop?
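E.g. (untested sketch; assumes the function is changed to take the domain so
the lock is reachable, that mmcfg_lock is an ordinary rwlock_t, and that the
list field name below is made up for illustration):

    void destroy_vpci_mmcfg(struct domain *d)
    {
        /* "mmcfg_regions" is a hypothetical name for the list head. */
        struct list_head *mmcfg_regions = &d->arch.hvm_domain.mmcfg_regions;

        write_lock(&d->arch.hvm_domain.mmcfg_lock);
        while ( !list_empty(mmcfg_regions) )
        {
            struct hvm_mmcfg *mmcfg = list_first_entry(mmcfg_regions,
                                                       struct hvm_mmcfg, next);

            list_del(&mmcfg->next);
            xfree(mmcfg);
        }
        write_unlock(&d->arch.hvm_domain.mmcfg_lock);
    }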

With at least the earlier point taken care of:
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>

Jan

