
Re: [Xen-devel] [PATCH v4 12/34] xen/xsplice: Hypervisor implementation of XEN_XSPLICE_op

>>> On 24.03.16 at 04:13, <konrad.wilk@xxxxxxxxxx> wrote:
> On Wed, Mar 23, 2016 at 07:51:29AM -0600, Jan Beulich wrote:
>> >>> On 15.03.16 at 18:56, <konrad.wilk@xxxxxxxxxx> wrote:
>> And then of course the EXPERT question comes up again. No
>> matter that IanC is no longer around to help with the
>> argumentation, the point he has been making about too many
>> flavors ending up in the wild continues to apply.
> 'too many flavors'? As in different versions of Xen with or without
> these options enabled? 


>> > +    {
>> > +        spin_unlock_recursive(&payload_lock);
>> > +        return -EINVAL;
>> > +    }
>> > +
>> > +    list_for_each_entry( data, &payload_list, list )
>> Aren't you lacking a list->version check prior to entering this loop
>> (which would then mean you don't need to store it below, but only
>> on the error path from that check)?
> No. The toolstack has no idea of what the right version is on the
> first invocation. Which is OK since it gets fresh data (it is
> its first invocation).
> On subsequent invocations we gleefully populate up to
> min(payload_cnt, ->nr) entries of data even if the version the toolstack
> provided is different. The toolstack then has to decide whether to throw
> away the data and retry the hypercall, or print it out as is.

Makes sense, but doesn't really fit with this

+The caller provides:
+ * `version`. Version of the payload. Caller should re-use the field provided
+    by the hypervisor. If the value differs the data is stale.

in the most recent patch 11.

> Here is the newly minted patch with your suggestions hopefully
> implemented to your liking!

I think this immediate providing of a partly next-version patch is
getting unwieldy: I just can't re-review several of these large
patches again every day. I'll look at the entire next version once
you've sent that out.


Xen-devel mailing list