Re: [PATCH v3 8/8] common: convert vCPU info area registration


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 28 Sep 2023 11:53:36 +0200
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • Delivery-date: Thu, 28 Sep 2023 09:53:46 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 28.09.2023 10:24, Roger Pau Monné wrote:
> On Wed, May 03, 2023 at 05:58:30PM +0200, Jan Beulich wrote:
>> Switch to using map_guest_area(). Noteworthy differences from
>> map_vcpu_info():
>> - remote vCPU-s are paused rather than checked for being down (which in
>>   principle can change right after the check),
>> - the domain lock is taken for a much smaller region,
>> - the error code for an attempt to re-register the area is now -EBUSY,
>> - we could in principle permit de-registration when no area was
>>   previously registered (which would permit "probing", if necessary for
>>   anything).
>>
>> Note that this eliminates a bug in copy_vcpu_settings(): The function
>> did allocate a new page regardless of the GFN already having a mapping,
>> thus in particular breaking the case of two vCPU-s having their info
>> areas on the same page.
>>
>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> 
> Some minor comments below:
> 
> Acked-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

Thanks.

>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -127,10 +127,10 @@ static void vcpu_info_reset(struct vcpu
>>  {
>>      struct domain *d = v->domain;
> 
> d could likely be made const?

Probably, but this is just context here.

>> @@ -1633,14 +1528,44 @@ int map_guest_area(struct vcpu *v, paddr
>>  
>>      domain_lock(d);
>>  
>> -    if ( map )
>> -        populate(map, v);
>> +    /* No re-registration of the vCPU info area. */
>> +    if ( area != &v->vcpu_info_area || !area->pg )
> 
> It would be nice if this check could be done earlier, as to avoid
> having to fetch and map the page just to discard it.  That would
> however require taking the domain lock earlier.

To balance the two conflicting goals, there is an unlocked early check in
the caller. See also the related RFC remark; I'm inclined to read your
comment as "yes, keep the early check".
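
For illustration, such an unlocked early check in the caller could look
roughly like the sketch below. This is only a sketch: the exact caller,
the field names and map_guest_area()'s full signature are assumptions
based on the hunks in this mail, and the authoritative check remains the
one taken under the domain lock inside map_guest_area().

    /* Best-effort, unlocked rejection before any page is fetched or
     * mapped.  Racy by design; map_guest_area() re-checks under the
     * domain lock and returns -EBUSY if the race is lost. */
    if ( v->vcpu_info_area.pg )
        return -EBUSY;

    rc = map_guest_area(v, gaddr, sizeof(vcpu_info_t),
                        &v->vcpu_info_area, vcpu_info_populate);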

>> +    {
>> +        if ( map )
>> +            populate(map, v);
>>  
>> -    SWAP(area->pg, pg);
>> -    SWAP(area->map, map);
>> +        SWAP(area->pg, pg);
>> +        SWAP(area->map, map);
>> +    }
>> +    else
>> +        rc = -EBUSY;
>>  
>>      domain_unlock(d);
>>  
>> +    /* Set pending flags /after/ new vcpu_info pointer was set. */
>> +    if ( area == &v->vcpu_info_area && !rc )
>> +    {
>> +        /*
>> +         * Mark everything as being pending just to make sure nothing gets
>> +         * lost.  The domain will get a spurious event, but it can cope.
>> +         */
>> +#ifdef CONFIG_COMPAT
>> +        if ( !has_32bit_shinfo(d) )
>> +        {
>> +            vcpu_info_t *info = area->map;
>> +
>> +            /* For VCPUOP_register_vcpu_info handling in common_vcpu_op(). */
>> +            BUILD_BUG_ON(sizeof(*info) != sizeof(info->compat));
>> +            write_atomic(&info->native.evtchn_pending_sel, ~0);
>> +        }
>> +        else
>> +#endif
>> +            write_atomic(&vcpu_info(v, evtchn_pending_sel), ~0);
> 
> Can't the setting of evtchn_pending_sel be done in
> vcpu_info_populate()?

No, see the comment ahead of the outer if(): populate() needs to be called
before the pointer is updated, whereas the pending flags must only be set
afterwards.
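
Condensing the hunks above into the required ordering (a sketch of the
sequence, not the literal patch):

    /* 1. Initialise the new mapping via populate() while the guest still
     *    sees the old vcpu_info pointer. */
    populate(map, v);

    /* 2. Switch the pointers, under the domain lock. */
    SWAP(area->pg, pg);
    SWAP(area->map, map);

    /* 3. Only after the switch, per "Set pending flags /after/ new
     *    vcpu_info pointer was set": mark everything pending.  Since
     *    populate() necessarily runs before step 2, this write cannot be
     *    folded into vcpu_info_populate(). */
    write_atomic(&vcpu_info(v, evtchn_pending_sel), ~0);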

>> @@ -1801,6 +1729,27 @@ bool update_runstate_area(struct vcpu *v
>>      return rc;
>>  }
>>  
>> +/*
>> + * This makes sure that the vcpu_info is always pointing at a valid piece of
>> + * memory, and it sets a pending event to make sure that a pending event
>> + * doesn't get missed.
>> + */
>> +static void cf_check
>> +vcpu_info_populate(void *map, struct vcpu *v)
>> +{
>> +    vcpu_info_t *info = map;
>> +
>> +    if ( v->vcpu_info_area.map == &dummy_vcpu_info )
>> +    {
>> +        memset(info, 0, sizeof(*info));
>> +#ifdef XEN_HAVE_PV_UPCALL_MASK
>> +        __vcpu_info(v, info, evtchn_upcall_mask) = 1;
>> +#endif
> 
> I'm not sure about the point of those guards, this will always be 1,
> as we always build the hypervisor with the headers in xen/public?

That's the case on x86, but this is common code, and hence the build would
break on other architectures if the guard wasn't there.

> Is it to make backports easier?

I'm afraid I don't see how backporting considerations could come into play here.

Jan
