
Re: [PATCH v3 4/8] x86/mem-sharing: copy GADDR based shared guest areas


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Wed, 27 Sep 2023 17:11:23 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>
  • Delivery-date: Wed, 27 Sep 2023 15:11:32 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 27.09.2023 16:05, Roger Pau Monné wrote:
> On Wed, Sep 27, 2023 at 02:06:47PM +0200, Jan Beulich wrote:
>> On 27.09.2023 13:08, Roger Pau Monné wrote:
>>> On Wed, May 03, 2023 at 05:56:46PM +0200, Jan Beulich wrote:
>>>> --- a/xen/arch/x86/mm/mem_sharing.c
>>>> +++ b/xen/arch/x86/mm/mem_sharing.c
>>>> @@ -1641,6 +1641,68 @@ static void copy_vcpu_nonreg_state(struc
>>>>      hvm_set_nonreg_state(cd_vcpu, &nrs);
>>>>  }
>>>>  
>>>> +static int copy_guest_area(struct guest_area *cd_area,
>>>> +                           const struct guest_area *d_area,
>>>> +                           struct vcpu *cd_vcpu,
>>>> +                           const struct domain *d)
>>>> +{
>>>> +    mfn_t d_mfn, cd_mfn;
>>>> +
>>>> +    if ( !d_area->pg )
>>>> +        return 0;
>>>> +
>>>> +    d_mfn = page_to_mfn(d_area->pg);
>>>> +
>>>> +    /* Allocate & map a page for the area if it hasn't been already. */
>>>> +    if ( !cd_area->pg )
>>>> +    {
>>>> +        gfn_t gfn = mfn_to_gfn(d, d_mfn);
>>>> +        struct p2m_domain *p2m = p2m_get_hostp2m(cd_vcpu->domain);
>>>> +        p2m_type_t p2mt;
>>>> +        p2m_access_t p2ma;
>>>> +        unsigned int offset;
>>>> +        int ret;
>>>> +
>>>> +        cd_mfn = p2m->get_entry(p2m, gfn, &p2mt, &p2ma, 0, NULL, NULL);
>>>> +        if ( mfn_eq(cd_mfn, INVALID_MFN) )
>>>> +        {
>>>> +            struct page_info *pg = alloc_domheap_page(cd_vcpu->domain, 0);
>>>> +
>>>> +            if ( !pg )
>>>> +                return -ENOMEM;
>>>> +
>>>> +            cd_mfn = page_to_mfn(pg);
>>>> +            set_gpfn_from_mfn(mfn_x(cd_mfn), gfn_x(gfn));
>>>> +
>>>> +        ret = p2m->set_entry(p2m, gfn, cd_mfn, PAGE_ORDER_4K, p2m_ram_rw,
>>>> +                                 p2m->default_access, -1);
>>>> +            if ( ret )
>>>> +                return ret;
>>>> +        }
>>>> +        else if ( p2mt != p2m_ram_rw )
>>>> +            return -EBUSY;
>>>
>>> Shouldn't populating the underlying gfn in the fork case
>>> be done by map_guest_area() itself?
>>
>> I've used the existing logic for the info area as the basis for my code.
>> As noted in the patch switching the info area logic to use the generalized
>> infrastructure, I've taken the liberty of addressing an issue in the
>> original logic. But it was never a goal to re-write things from scratch.
>> If, as Tamas appears to agree, there is a better way of layering things
>> here, then please make that change as a follow-on patch.
> 
> Hm, I'm unsure whether the code that allocates the page and adds it to the
> p2m for the vcpu_info area is required.  map_vcpu_info() should already be
> able to handle this case and fork the page from the previous VM.

I don't think that's the case. It would be able to unshare, but as I
understand it the fork doesn't use shared pages. I think it instead runs
on an extremely sparse p2m, where pages from the parent are brought in
only as they're touched. Tamas?

Jan
