
Re: [Xen-devel] [PATCH 3/3 V3] XSA-60 security hole: cr0.cd handling



Jan Beulich wrote:
>>>> On 28.10.13 at 09:31, "Liu, Jinsong" <jinsong.liu@xxxxxxxxx> wrote:
>> Jan Beulich wrote:
>>> While mentally going through this logic again I noticed, however,
>>> that the cache flushing your patch is doing is still insufficient:
>>> Doing this just when CD gets set and in the context switch path is
>>> not enough. This needs to be done prior to each VM entry, unless it
>>> can be proven that the hypervisor (or the service domain) didn't
>>> touch guest memory.
>> 
>> I think it's safe: we only need to guarantee that no vcpu runs in
>> guest context during the small window between the cache flush and
>> the TLB invalidation -- after that it doesn't matter whether the
>> hypervisor touches guest memory or not, since the cache is clean
>> and the old memory type in the TLB has been invalidated (accesses
>> are UC afterwards), so no cache line can be polluted by guest
>> context any more.
> 
> No - consider a VM exit while in this mode where, in order to process
> it, the hypervisor or service domain touches guest memory. Such
> touching will happen with caches in use, and hence unwritten data
> may be left in the caches when exiting back to the guest if there's
> no wbinvd on the VM entry path. I don't think anything is said
> explicitly anywhere about whether cache contents are taken into
> consideration when CD=0 but PAT enforces UC.
> 

I think we don't need to worry about the hypervisor touching guest memory 
while the guest is in UC mode.
I drafted 2 tests, monitoring various events during the guest UC period.
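
To make the window concrete: the CR0.CD=1 handling under discussion boils 
down to roughly the sequence below. This is a sketch only -- the helper name 
is illustrative shorthand, not the literal patch, though wbinvd() and 
hvm_asid_flush_vcpu() are real Xen primitives.

    /* Illustrative sketch of the CR0.CD=1 path; the function name is
     * hypothetical, not the literal XSA-60 patch. */
    static void uc_mode_enter(struct vcpu *v)
    {
        /* 1. Flush dirty lines so nothing cached survives into UC mode. */
        wbinvd();

        /* 2. Invalidate the guest TLB so stale cacheable memory types
         *    are dropped; all further guest accesses are then UC.       */
        hvm_asid_flush_vcpu(v);

        /* No vcpu may run guest context between steps 1 and 2 -- that
         * is the "small window" referred to above.                      */
    }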

Test 1 adds wbinvd before vmentry. When the guest boots, it hangs for 10s ~ 
60s at the vcpu0 UC stage, with bunches of vmexits caused by a lapic timer 
interrupt storm.
Test 2 is our current patch. It shows that during guest UC, vmexits are only 
triggered by interrupts / CR access / MSR access / wbinvd. With these vmexits 
the hypervisor will not touch any guest memory.

For test 1, the lapic timer interrupt storm is triggered by the 'wbinvd' 
(without it everything works fine; I didn't dig out the reason -- maybe 
wbinvd gets involved in some tricky vmentry microcode path?), but anyway, 
test 2 demonstrates that the hypervisor does not touch guest memory during 
the guest UC period.
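
For reference, the Test 1 instrumentation amounts to roughly the following 
sketch. The counter names and the is_in_uc_mode check are illustrative 
shorthand for what I actually ran, not the exact code; vmx_vmenter_helper() 
is the real VM entry hook in xen/arch/x86/hvm/vmx/vmx.c.

    /* Sketch of the Test 1 instrumentation; counter names are
     * hypothetical. Test 2 is identical minus the wbinvd(). */
    static unsigned long uc_vmenter_count, uc_vmexit_count;
    static unsigned long uc_exit_count[64];   /* per exit-reason tally */

    /* Runs on the VM entry path, just before resuming the guest. */
    void vmx_vmenter_helper(void)
    {
        struct vcpu *curr = current;

        if ( curr->domain->arch.hvm_domain.is_in_uc_mode )
        {
            uc_vmenter_count++;
            wbinvd();      /* Test 1 only: flush caches on every entry */
        }
    }

    /* In vmx_vmexit_handler(), right after reading the exit reason: */
    if ( curr->domain->arch.hvm_domain.is_in_uc_mode )
    {
        uc_vmexit_count++;
        if ( exit_reason < ARRAY_SIZE(uc_exit_count) )
            uc_exit_count[exit_reason]++;
    }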

Test logs attached below:
=========================================================
Test 1: add wbinvd before vmentry (at vmx_vmenter_helper())
(XEN) uc_vmenter_count = 10607
(XEN) uc_vmexit_count = 10607
(XEN) EXIT-REASON      COUNT
(XEN)           1      10463    // EXIT_REASON_EXTERNAL_INTERRUPT
(XEN)          28         10    // EXIT_REASON_CR_ACCESS
(XEN)          31        114    // EXIT_REASON_MSR_READ
(XEN)          32         15    // EXIT_REASON_MSR_WRITE
(XEN)          54          5    // EXIT_REASON_WBINVD
(XEN) TOTAL EXIT-REASON-COUNT = 10607
(XEN)
(XEN) vcpu[0] vmentry count = 10492
(XEN) vcpu[1] vmentry count = 37
(XEN) vcpu[2] vmentry count = 40
(XEN) vcpu[3] vmentry count = 38
(XEN) interrupt vec 0xfa occurs 10450 times   // lapic timer
(XEN) interrupt vec 0xfb occurs 13 times      // call function IPI


Test 2: current patch, which does not add wbinvd before vmentry
(XEN) uc_vmenter_count = 147
(XEN) uc_vmexit_count = 147
(XEN) EXIT-REASON      COUNT
(XEN)           1          3    // EXIT_REASON_EXTERNAL_INTERRUPT
(XEN)          28         10    // EXIT_REASON_CR_ACCESS
(XEN)          31        114    // EXIT_REASON_MSR_READ
(XEN)          32         15    // EXIT_REASON_MSR_WRITE
(XEN)          54          5    // EXIT_REASON_WBINVD
(XEN) TOTAL EXIT-REASON-COUNT = 147
(XEN)
(XEN) vcpu[0] vmentry count = 45
(XEN) vcpu[1] vmentry count = 34
(XEN) vcpu[2] vmentry count = 34
(XEN) vcpu[3] vmentry count = 34
(XEN) interrupt vec 0xfa occurs 3 times  // lapic timer
==================================================================

Thanks,
Jinsong