
Re: [Xen-devel] [PATCH V3] x86/vm_event: block interrupt injection for sync vm_events


  • To: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>, Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
  • From: Juergen Gross <jgross@xxxxxxxx>
  • Date: Mon, 14 Jan 2019 15:42:03 +0100
  • Cc: Kevin Tian <kevin.tian@xxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wei.liu2@xxxxxxxxxx>, Jun Nakajima <jun.nakajima@xxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Julien Grall <julien.grall@xxxxxxx>, Suravee Suthikulpanit <suravee.suthikulpanit@xxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>, Brian Woods <brian.woods@xxxxxxx>, Roger Pau Monne <roger.pau@xxxxxxxxxx>
  • Delivery-date: Mon, 14 Jan 2019 14:42:10 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 14/01/2019 11:56, Razvan Cojocaru wrote:
> On 1/14/19 11:53 AM, Jan Beulich wrote:
>>>>> On 14.01.19 at 10:34, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>> On 1/12/19 12:04 AM, Boris Ostrovsky wrote:
>>>> On 12/14/18 6:49 AM, Razvan Cojocaru wrote:
>>>>> Block interrupts (in vmx_intr_assist()) for the duration of
>>>>> processing a sync vm_event (similarly to the strategy
>>>>> currently used for single-stepping). Otherwise, attempting
>>>>> to emulate an instruction when requested by a vm_event
>>>>> reply may legitimately need to call e.g.
>>>>> hvm_inject_page_fault(), which then overwrites the active
>>>>> interrupt in the VMCS.
>>>>>
>>>>> The sync vm_event handling path on x86/VMX is (roughly):
>>>>> monitor_traps() -> process vm_event -> vmx_intr_assist()
>>>>> (possibly writing VM_ENTRY_INTR_INFO) ->
>>>>> hvm_vm_event_do_resume() -> hvm_emulate_one_vm_event()
>>>>> (possibly overwriting the VM_ENTRY_INTR_INFO value).
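>>>>>
>>>>> As a rough sketch of the idea (not necessarily the exact patch
>>>>> hunk; the flag name and the places where it gets set and cleared
>>>>> are assumptions here), the check added to vmx_intr_assist()
>>>>> looks along these lines:
>>>>>
>>>>>     /* xen/arch/x86/hvm/vmx/intr.c (illustrative sketch) */
>>>>>     void vmx_intr_assist(void)
>>>>>     {
>>>>>         struct vcpu *v = current;
>>>>>
>>>>>         /*
>>>>>          * Defer all interrupt injection while a sync vm_event
>>>>>          * reply is outstanding, so that emulation requested by
>>>>>          * the reply (which may call hvm_inject_page_fault())
>>>>>          * cannot clobber VM_ENTRY_INTR_INFO.  The hypothetical
>>>>>          * sync_event flag would be set by monitor_traps() for
>>>>>          * sync events and cleared once the reply is processed.
>>>>>          */
>>>>>         if ( unlikely(v->arch.vm_event) &&
>>>>>              v->arch.vm_event->sync_event )
>>>>>             return;
>>>>>
>>>>>         /* ... existing injection logic continues below ... */
>>>>>     }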
>>>>>
>>>>> This patch may also be helpful for the future removal
>>>>> of may_defer in hvm_set_cr{0,3,4} and hvm_set_msr().
>>>>>
>>>>> Signed-off-by: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
>>>>
>>>>
>>>> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
>>>
>>> Thanks! So now we have three Reviewed-bys; if I'm not mistaken, all
>>> we still need are Tamas' ack (for the vm_event part) and Julien's /
>>> Stefano's acks (for ARM), or otherwise.
>>
>> And you'd need to talk Jürgen into allowing this in, now that we're
>> past the freeze point.
> 
> (Adding Jürgen to the conversation.)
> 
> Right, that would be ideal if possible - the code has absolutely no
> impact on anything unless vm_event is active on the domain, which is
> never the case for the use cases considered for a Xen release.
> 
> So the case I'm making for the patch to go in 4.12 is that:
> 
> 1. It's perfectly harmless (it affects nothing except introspection).
> 
> 2. It's trivial and thus very easy to see that it's correct.
> 
> 3. It fixes a major headache for us, and thus it is a great improvement
> from an introspection standpoint (fixes OS crashes / hangs which we'd
> otherwise need to work around in rather painful ways).
> 
> 4. V3 of the patch was sent out on Dec 14th - it's just that
> reviewers have had other priorities and it didn't gather all the acks
> in time.
> 
> However, if it's not possible or desirable to allow this in, the next
> best thing is to at least have all the acks necessary for it to go in
> first thing once the freeze is over.
> 
> From Julien's reply I understand that the last ack necessary is Tamas'.

With that ack just arrived:

Release-acked-by: Juergen Gross <jgross@xxxxxxxx>


Juergen


 

