
Re: [Xen-devel] [PATCH 0/2 for-4.12] Introduce runstate area registration with phys address


  • To: Julien Grall <julien.grall@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: Juergen Gross <jgross@xxxxxxxx>
  • Date: Fri, 8 Mar 2019 11:18:35 +0100
  • Cc: Andrii Anisov <andrii.anisov@xxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Andrii Anisov <andrii_anisov@xxxxxxxx>, xen-devel@xxxxxxxxxxxxx
  • Delivery-date: Fri, 08 Mar 2019 10:18:46 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 08/03/2019 11:15, Julien Grall wrote:
> Hi Juergen,
> 
> On 3/8/19 6:28 AM, Juergen Gross wrote:
>> On 07/03/2019 19:00, Julien Grall wrote:
>>> Hi Roger,
>>>
>>> On 07/03/2019 17:15, Roger Pau Monné wrote:
>>>> On Thu, Mar 07, 2019 at 04:36:59PM +0000, Julien Grall wrote:
>>>>> Hi Roger,
>>>>>
>>>>> Thank you for the answer.
>>>>>
>>>>> On 07/03/2019 16:16, Roger Pau Monné wrote:
>>>>>> On Thu, Mar 07, 2019 at 03:17:54PM +0000, Julien Grall wrote:
>>>>>>> Hi Andrii,
>>>>>>>
>>>>>>> On 07/03/2019 14:34, Andrii Anisov wrote:
>>>>>>>> On 07.03.19 16:02, Julien Grall wrote:
>>>>>>>>>>      - IMHO, this implementation is simpler and cleaner than
>>>>>>>>>> what I have for runstate mapping on access.
>>>>>>>>>
>>>>>>>>> Did you implement it using access_guest_memory_by_ipa?
>>>>>>>> Not exactly, access_guest_memory_by_ipa() has no implementation
>>>>>>>> for x86. But it is built around that code.
>>>>>>>
>>>>>>> For HVM, the equivalent function is hvm_copy_to_guest_phys. I
>>>>>>> don't know what the interface for PV would be. Roger, any idea?
>>>>>>
>>>>>> For PV I think you will have to use get_page_from_gfn, check the
>>>>>> permissions, map it, write and unmap it. The same flow would also
>>>>>> work for HVM, so I'm not sure if there's much point in using
>>>>>> hvm_copy_to_guest_phys. Or you can implement a generic
>>>>>> copy_to_guest_phys helper that works for both PV and HVM.
>>>>>>
>>>>>> Note that for translated guests you will have to walk the HAP page
>>>>>> tables for each vCPU for each context switch, which I think will be
>>>>>> expensive in terms of performance (I might be wrong however, since I
>>>>>> have no proof of this).
>>>>>
>>>>> AFAICT, we already walk the page-tables with the current
>>>>> implementation, so this should be no different here, except that we
>>>>> will not need to walk the guest PT. No?
>>>>
>>>> Yes, the current implementation is even worse because it walks both
>>>> the guest page tables and the HAP page tables in the HVM case. It
>>>> would be interesting IMO if we could avoid walking any of those page
>>>> tables.
>>>>
>>>> I see you have concerns about permanently mapping the runstate area,
>>>> so I'm not going to oppose. That said, even with only 1G of VA space
>>>> you can map plenty of runstate areas, and given that this is 32-bit
>>>> hardware I doubt you will ever have so many vCPUs that you run out of
>>>> VA space to map runstate areas.
>>>
>>> Actually the vmap is only 768MB. The vmap is at the moment used for
>>> mapping:
>>>      - MMIO devices (through ioremap)
>>>      - event channel pages
>>>
>>> As the runstate is far smaller than a page, this sounds like a waste
>>> of memory for a benefit that has not yet been shown. Indeed, the
>>> numbers provided by Andrii show either worse or similar performance.
>>>
>>> But TBH, I am not expecting a really clear performance improvement on
>>> Arm, as there is a lot to do in the context switch.
>>>
>>>>
>>>> That being said, if the implementation turns out to be more
>>>> complicated because of this permanent mapping, walking the guest HAP
>>>> page tables is certainly no worse than what's done ATM.
>>>
>>> To be honest I am not fully against always mapping the runstate in Xen.
>>> But I need data to show this is worth it. So far, the promised
>>> performance is not there and the implementation is not foolproof yet.
>>>
>>> If we want to keep the runstate mapped permanently, then we need to add
>>> either a lock or refcounting, so the page does not disappear during a
>>> context switch if we happen to update the runstate concurrently (via
>>> the hypercall).
>>>
>>> This may increase the complexity of the implementation (not sure by
>>> how much, though).
>>>
>>> Another solution is to prevent the runstate from being updated. But I
>>> think that would just add a bit more burden on the guest OS.
>>
>> Not sure about other systems, but the current Linux kernel registers
>> the runstate area for other cpus only if those are not up. So there is
>> no way the runstate of that foreign vcpu could be updated while it is
>> being registered. Of course this would need to be enforced (e.g. return
>> -EBUSY when registering the runstate of an active foreign vcpu).
> 
> I was under the impression that xen_vcpu_restore() (see
> arch/x86/xen/enlighten.c) may update the runstate while the vCPU is up
> (at least on pre-Xen 4.5). Did I miss anything?

Pre Xen 4.5: yes. But I don't think you want to introduce the new
semantics in Xen 4.4 and older?


Juergen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

