
Re: [PATCH v2 4/8] x86/mem-sharing: copy GADDR based shared guest areas


  • To: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 26 Jan 2023 17:48:11 +0100
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Delivery-date: Thu, 26 Jan 2023 16:48:23 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 26.01.2023 16:41, Tamas K Lengyel wrote:
> On Thu, Jan 26, 2023 at 3:14 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>
>> On 25.01.2023 16:34, Tamas K Lengyel wrote:
>>> On Tue, Jan 24, 2023 at 6:19 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>>>
>>>> On 23.01.2023 19:32, Tamas K Lengyel wrote:
>>>>> On Mon, Jan 23, 2023 at 11:24 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>>>>> On 23.01.2023 17:09, Tamas K Lengyel wrote:
>>>>>>> On Mon, Jan 23, 2023 at 9:55 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>>>>>>> --- a/xen/arch/x86/mm/mem_sharing.c
>>>>>>>> +++ b/xen/arch/x86/mm/mem_sharing.c
>>>>>>>> @@ -1653,6 +1653,65 @@ static void copy_vcpu_nonreg_state(struc
>>>>>>>>      hvm_set_nonreg_state(cd_vcpu, &nrs);
>>>>>>>>  }
>>>>>>>>
>>>>>>>> +static int copy_guest_area(struct guest_area *cd_area,
>>>>>>>> +                           const struct guest_area *d_area,
>>>>>>>> +                           struct vcpu *cd_vcpu,
>>>>>>>> +                           const struct domain *d)
>>>>>>>> +{
>>>>>>>> +    mfn_t d_mfn, cd_mfn;
>>>>>>>> +
>>>>>>>> +    if ( !d_area->pg )
>>>>>>>> +        return 0;
>>>>>>>> +
>>>>>>>> +    d_mfn = page_to_mfn(d_area->pg);
>>>>>>>> +
>>>>>>>> +    /* Allocate & map a page for the area if it hasn't been already. */
>>>>>>>> +    if ( !cd_area->pg )
>>>>>>>> +    {
>>>>>>>> +        gfn_t gfn = mfn_to_gfn(d, d_mfn);
>>>>>>>> +        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
>>>>>>>> +        p2m_type_t p2mt;
>>>>>>>> +        p2m_access_t p2ma;
>>>>>>>> +        unsigned int offset;
>>>>>>>> +        int ret;
>>>>>>>> +
>>>>>>>> +        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
>>>>>>>> +        if ( mfn_eq(cd_mfn, INVALID_MFN) )
>>>>>>>> +        {
>>>>>>>> +            struct page_info *pg = alloc_domheap_page(cd_vcpu->domain, 0);
>>>>>>>> +
>>>>>>>> +            if ( !pg )
>>>>>>>> +                return -ENOMEM;
>>>>>>>> +
>>>>>>>> +            cd_mfn = page_to_mfn(pg);
>>>>>>>> +            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
>>>>>>>> +
>>>>>>>> +            ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K, p2m_ram_rw,
>>>>>>>> +                                 p2m->default_access, -1);
>>>>>>>> +            if ( ret )
>>>>>>>> +                return ret;
>>>>>>>> +        }
>>>>>>>> +        else if ( p2mt != p2m_ram_rw )
>>>>>>>> +            return -EBUSY;
>>>>>>>> +
>>>>>>>> +        /*
>>>>>>>> +         * Simply specify the entire range up to the end of the page. All the
>>>>>>>> +         * function uses it for is a check for not crossing page boundaries.
>>>>>>>> +         */
>>>>>>>> +        offset = PAGE_OFFSET(d_area->map);
>>>>>>>> +        ret = map_guest_area(cd_vcpu, gfn_to_gaddr(gfn) + offset,
>>>>>>>> +                             PAGE_SIZE - offset, cd_area, NULL);
>>>>>>>> +        if ( ret )
>>>>>>>> +            return ret;
>>>>>>>> +    }
>>>>>>>> +    else
>>>>>>>> +        cd_mfn = page_to_mfn(cd_area->pg);
>>>>>>>
>>>>>>> Everything to this point seems to be non mem-sharing/forking related.
>>>>>>> Could these live somewhere else? There must be some other place where
>>>>>>> allocating these areas happens already for non-fork VMs so it would
>>>>>>> make sense to just refactor that code to be callable from here.
>>>>>>
>>>>>> It is the "copy" aspect which makes this mem-sharing (or really fork)
>>>>>> specific. Plus in the end this is no different from what you have
>>>>>> there right now for copying the vCPU info area. In the final patch
>>>>>> that other code gets removed by re-using the code here.
>>>>>
>>>>> Yes, the copy part is fork-specific. Arguably if there was a way to do
>>>>> the allocation of the page for vcpu_info I would prefer that being
>>>>> elsewhere, but while the only requirement is allocate-page and copy
>>>>> from parent I'm OK with that logic being in here because it's really
>>>>> straightforward. But now you also do extra sanity checks here which
>>>>> are harder to comprehend in this context alone.
>>>>
>>>> What sanity checks are you talking about (also below, where you claim
>>>> map_guest_area() would be used only to sanity check)?
>>>
>>> Did I misread your comment above "All the function uses it for is a
>>> check for not crossing page boundaries"? That sounds to me like a
>>> simple sanity check, unclear why it matters though and why only for
>>> forks.
>>
>> The comment is about the function's use of the range it is being passed.
>> It doesn't say in any way that the function is doing only sanity checking.
>> If the comment wording is ambiguous or unclear, I'm happy to take
>> improvement suggestions.
> 
> Yes, please do, it definitely was confusing while reviewing the patch.

I'm sorry, but what does "please do" mean when I asked for improvement
suggestions? I continue to think the comment is quite clear as is, so
if anything needs adjusting, I'd need to know pretty precisely what it
is that needs adding and/or re-writing. I can't, after all, guess what
your misunderstanding resulted from.

Jan
