
Re: [Xen-devel] [PATCH net-next v7 4/9] xen-netback: Introduce TX grant mapping



Pulling out this one comment for the attention of the core Xen/Linux
maintainers.

On Thu, 2014-03-06 at 21:48 +0000, Zoltan Kiss wrote:
[...]
> @@ -343,8 +347,26 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
>       vif->pending_prod = MAX_PENDING_REQS;
>       for (i = 0; i < MAX_PENDING_REQS; i++)
>               vif->pending_ring[i] = i;
> -     for (i = 0; i < MAX_PENDING_REQS; i++)
> -             vif->mmap_pages[i] = NULL;
> +     spin_lock_init(&vif->callback_lock);
> +     spin_lock_init(&vif->response_lock);
> +     /* If ballooning is disabled, this will consume real memory, so you
> +      * better enable it. The long term solution would be to use just a
> +      * bunch of valid page descriptors, without dependency on ballooning
> +      */

I wonder if we ought to enforce this via Kconfig, i.e. by making
CONFIG_XEN_BACKEND (or the individual backend options) depend on (or
select?) CONFIG_XEN_BALLOON, or by making CONFIG_XEN_BALLOON
non-optional, etc.

IIRC David V was looking into a solution which automatically hotplugs a
new memory region for this case, but then I guess
CONFIG_XEN_BALLOON_MEMORY_HOTPLUG would equally need to be enabled.
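
Concretely, I'm thinking of something along these lines in
drivers/xen/Kconfig (just a rough sketch to show the shape of it; the
existing entries are reproduced from memory, and whether we want
"depends on" or "select", and whether it goes on XEN_BACKEND or on each
individual backend, is exactly the open question):

    config XEN_BACKEND
            bool "Backend driver support"
            depends on XEN_DOM0
            default y
            # new: refuse to build backends without the balloon driver...
            depends on XEN_BALLOON
            # ...or, alternatively, pull the balloon driver in automatically:
            # select XEN_BALLOON

    # existing entry, shown for reference; the auto-hotplug approach
    # would additionally need this (and hence MEMORY_HOTPLUG) enabled
    config XEN_BALLOON_MEMORY_HOTPLUG
            bool "Memory hotplug support for Xen balloon driver"
            depends on XEN_BALLOON && MEMORY_HOTPLUG

A select would keep existing configs working without intervention,
whereas a plain depends would make the backends quietly disappear from
menuconfig for anyone who has turned the balloon driver off.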

> +     err = alloc_xenballooned_pages(MAX_PENDING_REQS,
> +                                    vif->mmap_pages,
> +                                    false);
[...]

