
Re: [Xen-devel] [PATCH v3 1/2] x86/mem-sharing: Bulk mem-sharing entire domains



>>> On 16.10.15 at 19:02, <tamas@xxxxxxxxxxxxx> wrote:
> On Fri, Oct 16, 2015 at 12:46 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>> >>> On 15.10.15 at 20:09, <tamas@xxxxxxxxxxxxx> wrote:
>> > +                    rc = -EFAULT;
>> > +                else
>> > +                    rc = hypercall_create_continuation(__HYPERVISOR_memory_op,
>> > +                                                       "lh", XENMEM_sharing_op,
>> > +                                                       arg);
>> > +            } else {
>>
>> Coding style.
> 
> Coding style matches the rest of the file.

Sorry, but this is not Xen coding style, and even if there are other
bad examples in the file, they shouldn't be used as an excuse.
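
Just to illustrate (this is only a sketch, not the exact patch
context), Xen style wants the closing brace and the "else" on their
own lines:

                rc = hypercall_create_continuation(__HYPERVISOR_memory_op,
                                                   "lh", XENMEM_sharing_op,
                                                   arg);
            }
            else
            {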

>> > +                mso.u.bulk.applied = atomic_read(&d->shr_pages);
>>
>> Is there no possibility for something else to also modify d->shr_pages
>> in parallel, misguiding the caller when looking at the output field?
>> Also, you're not copying this back to guest memory...
> 
> It shouldn't be an issue, as I don't really care about how many pages were
> shared by the operation itself; I care about the total number of pages
> shared on the domain. If there were already some pages shared before this
> operation, those should be counted here as well. I could of course issue a
> separate getdomaininfo to get this same information; it's just more
> expedient to report it right away instead of having to do a separate call.
> By default shr_pages will be equivalent to the number of pages successfully
> shared during this operation. If someone decides to also do unsharing in
> parallel to this op (how would that even make sense?), well, that's not
> supported right now, so all bets are off from my perspective.

But the field being named "applied" suggests a different meaning.
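
(Purely as an illustrative sketch, and glossing over how the snapshot
would need to be preserved across hypercall continuations, a field
named "applied" would normally be expected to hold something like

        int before = atomic_read(&d->shr_pages);
        /* ... bulk-sharing loop ... */
        mso.u.bulk.applied = atomic_read(&d->shr_pages) - before;

i.e. the delta produced by this call, not the domain-wide total.)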

> Also, we are copying it back to guest memory when the operation finishes
> for all mso. It's not bulk-specific; it applies to all !rc cases further
> down.

Oh, okay.

>> > @@ -482,7 +483,16 @@ struct xen_mem_sharing_op {
>> >              uint64_aligned_t client_gfn;    /* IN: the client gfn */
>> >              uint64_aligned_t client_handle; /* IN: handle to the client page */
>> >              domid_t  client_domain; /* IN: the client domain id */
>> > -        } share;
>> > +        } share;
>> > +        struct mem_sharing_op_bulk {         /* OP_BULK_SHARE */
>> > +            domid_t client_domain;           /* IN: the client domain id */
>>
>> Explicit padding here please (albeit I see already from the context
>> that this isn't being done in e.g. the share sub-structure above
>> either).
> 
> Not sure what you mean by explicit padding here. The way it is right now
> pretty much matches what was already in place.

domid_t is a 16-bit value, i.e. there's a gap after this field which
would better be made explicit (as we do in many places elsewhere,
albeit - as said - sadly not in the example visible in patch context
here).
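
Something along these lines (the exact set of neighboring fields is
just a placeholder here) is what I mean by making the gap explicit:

        struct mem_sharing_op_bulk {         /* OP_BULK_SHARE */
            domid_t client_domain;           /* IN: the client domain id */
            uint16_t _pad[3];                /* explicit padding up to the
                                                next 64-bit field */
            uint64_aligned_t applied;        /* OUT: as discussed above */
        } bulk;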

Jan

