Re: [Xen-devel] [PATCH v3 1/2] x86/mem-sharing: Bulk mem-sharing entire domains
>>> On 13.05.16 at 17:31, <tamas@xxxxxxxxxxxxx> wrote:
> On Fri, May 13, 2016 at 9:09 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>> On 13.05.16 at 16:50, <tamas@xxxxxxxxxxxxx> wrote:
>>> On Fri, May 13, 2016 at 6:00 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>>>> On 12.05.16 at 17:25, <tamas@xxxxxxxxxxxxx> wrote:
>>>>> @@ -1468,6 +1505,69 @@ int
>>>>> +        if ( rc )
>>>>> +        {
>>>>> +            rcu_unlock_domain(cd);
>>>>> +            goto out;
>>>>> +        }
>>>>> +
>>>>> +        if ( !mem_sharing_enabled(cd) )
>>>>> +        {
>>>>> +            rcu_unlock_domain(cd);
>>>>> +            rc = -EINVAL;
>>>>> +            goto out;
>>>>> +        }
>>>>> +
>>>>> +        if ( !atomic_read(&d->pause_count) ||
>>>>> +             !atomic_read(&cd->pause_count) )
>>>>> +        {
>>>>> +            rcu_unlock_domain(cd);
>>>>> +            rc = -EINVAL;
>>>>> +            goto out;
>>>>> +        }
>>>>> +
>>>>> +        max_sgfn = domain_get_maximum_gpfn(d);
>>>>> +        max_cgfn = domain_get_maximum_gpfn(cd);
>>>>> +
>>>>> +        if ( max_sgfn != max_cgfn || max_sgfn < mso.u.bulk.start )
>>>>
>>>> Why would the two domains need to agree in their maximum
>>>> GPFN? There's nothing similar in this file so far. Nor does the
>>>> right side of the || match anything pre-existing...
>>>
>>> The use-case for this function is to deduplicate identical VMs, not to
>>> blindly share pages across arbitrary domains. So this is a safety
>>> check to avoid accidentally running this function on domains that are
>>> obviously not identical. The right-hand side is a safety check against
>>> improperly initialized input structs where the start point is obviously
>>> outside the memory of the domain.
>>
>> Is that use case the only possible one, or merely the one you care
>> about? In the former case, I'd be okay as long as a respective
>> comment got added.
>
> I would say "blind" sharing like this only makes sense for identical
> VMs. Replacing the memory of a VM with that of another one not in
> the same state will certainly lead to a spectacular crash, so we
> should do at least some sanity checking beforehand.
Crash? Pages would be shared only if their contents match (and
unshared when they're about to diverge), and once that's
guaranteed, I don't see how it matters whether the two involved
guests are identical. I agree that between dissimilar guests the
potential for sharing is likely much lower, but that's about all.
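(For illustration only, a sketch of how the max-GPFN check might read with the kind of clarifying comment asked for above; the comment wording is an assumption, not taken from a later revision of the patch:)

    /*
     * This operation is intended for deduplicating identical VMs (e.g.
     * clones created from the same parent), so as a sanity check against
     * mismatched domains or uninitialized input, require both domains to
     * have the same maximum GPFN and the start gfn to lie within it.
     */
    max_sgfn = domain_get_maximum_gpfn(d);
    max_cgfn = domain_get_maximum_gpfn(cd);

    if ( max_sgfn != max_cgfn || max_sgfn < mso.u.bulk.start )
    {
        rcu_unlock_domain(cd);
        rc = -EINVAL;
        goto out;
    }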
>>>>> @@ -488,7 +489,18 @@ struct xen_mem_sharing_op {
>>>>>              uint64_aligned_t client_gfn;    /* IN: the client gfn */
>>>>>              uint64_aligned_t client_handle; /* IN: handle to the client page */
>>>>>              domid_t client_domain;          /* IN: the client domain id */
>>>>> -        } share;
>>>>> +        } share;
>>>>> +        struct mem_sharing_op_bulk {        /* OP_BULK_SHARE */
>>>>> +            uint64_aligned_t start;         /* IN: start gfn. Set to 0 for
>>>>> +                                               full deduplication. Field is
>>>>> +                                               used internally and may change
>>>>> +                                               when the hypercall returns. */
>>>>> +            uint64_aligned_t shared;        /* OUT: the number of gfns that
>>>>> +                                               are shared after this operation
>>>>> +                                               including pages already shared
>>>>> +                                               before */
>>>>> +            domid_t client_domain;          /* IN: the client domain id */
>>>>> +        } bulk;
>>>>
>>>> Let's not repeat pre-existing mistakes: There is explicit padding
>>>> missing here, which then also ought to be checked to be zero on
>>>> input.
>>>
>>> This struct is part of a union and is smaller than the largest struct in
>>> the union, even with padding. So how would padding have any effect on
>>> anything?
>>
>> Being able to use the space for future extensions without having
>> to bump the interface version. In domctl-s it's not as important
>> due to them being separately versioned, but anyway.
>
> I don't really follow; we are not growing the union here. This struct
> is still smaller than the space available in the union, so what would
> prevent us from later expanding this struct to the size of the
> union without padding?
If a future hypervisor expects some value in a field that's now
(anonymous) padding, and a caller written against the headers
with just your patch applied passes a structure with random
data in those padding fields to that new hypervisor, what do
you expect to happen?
Jan
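(For illustration only, a sketch of the explicit padding and zero-on-input check being asked for; the _pad field name and its placement are assumptions, not taken from a later revision of the patch:)

    struct mem_sharing_op_bulk {         /* OP_BULK_SHARE */
        uint64_aligned_t start;          /* IN: start gfn (0 for full
                                            deduplication) */
        uint64_aligned_t shared;         /* OUT: gfns shared after this
                                            operation */
        domid_t client_domain;           /* IN: the client domain id */
        uint16_t _pad[3];                /* Must be set to zero on input */
    } bulk;

with the handler then rejecting non-zero padding, e.g.:

    if ( mso.u.bulk._pad[0] || mso.u.bulk._pad[1] || mso.u.bulk._pad[2] )
    {
        rc = -EINVAL;
        goto out;
    }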
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel