
Re: [Xen-devel] [V7 PATCH 3/7] pvh dom0: implement XENMEM_add_to_physmap_range for x86



>>> On 18.12.13 at 11:07, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> On Wed, 2013-12-18 at 07:55 +0000, Jan Beulich wrote:
>> >>> On 17.12.13 at 16:11, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:
>> >>>> On 17.12.13 at 15:40, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
>> >> On Tue, 2013-12-17 at 14:36 +0000, Jan Beulich wrote:
>> >>> >>> On 17.12.13 at 14:59, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
>> >>> > We could change the code but we could also tighten the interface
>> >>> > requirements, either by explicit specifying that the range is handled 
>> >>> > in
>> >>> > reverse order or by mandating that index/gpfn must not be repeated
>> >>> > (whether or not we actively try and detect such cases).
>> >>> 
>> >>> Specifying that this gets processed backwards would be, well,
>> >>> backwards. Requiring no duplicates (or else getting undefined
>> >>> behavior) would be possible. But processing the operation in the
>> >>> conventional order doesn't seem all that hard.
>> >> 
>> >> The reason I thought it would be tricky was finding somewhere to stash
>> >> the progress over the continuation. Do you have a cunning plan?
>> > 
>> > Just like we do in other cases - in the struct that was passed to
>> > us by the caller (incrementing the handles and decrementing the
>> > count as needed).
>> 
>> And I was wrong with this - there's no proper precedent of us
>> modifying hypercall interface structures except where certain
>> fields are specified to be outputs.
>> 
>> For XENMEM_add_to_physmap_range none of the fields is, so
>> even copying back the size field (like currently done on ARM,
>> and like also done for XENMEM_add_to_physmap's
>> XENMAPSPACE_gmfn_range sub-case) isn't really correct.
>> Instead the general method for encoding the continuation
>> point in mem-ops is to put the resume index in the high bits of
>> the first hypercall argument.
>> 
>> Whether we want to change the specification here (clarifying
>> that all of the structure may be modified by the hypervisor in
>> the course of executing the hypercall) instead of fixing the
>> implementation is open for discussion.
> 
> Isn't x86's xenmem_add_to_physmap precedent here? It modifies idx, gpfn
> and size which are not specified as outputs (more by omission than being
> explicitly inputs, but the implication of the comments is that they are
> inputs).

Right, as I said above. Yet that went in without anyone noticing
the inconsistent behavior, and hence I'd call it a bug.

> I'm happy to change the spec here. I think in general very few guests
> are going to be relying on the guest handle after the hypercall (they
> have the original array in their hand) and in this specific case I know
> that neither the x86 nor ARM implementation on Linux do so, and those
> are the only existing dom0s which use this h/call right now I think.

If a kernel chose to use static (say per-CPU) arrays and a static
interface structure here, it might easily set the handles just once...

> The alternative is to use MEMOP_EXTENT_SHIFT/MEMOP_CMD_MASK? I'd rather
> avoid that if we don't have to...

Admittedly it's not the nicest model, but that's how other memops
work. The question really is whether we want to allow the inconsistency
here, or generally allow modification of interface structure fields
that are documented as input-only.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
