
Re: [Xen-devel] Blocking CR and MSR writes via mem_access?


  • To: Tamas K Lengyel <tamas.lengyel@xxxxxxxxxxxx>
  • From: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>
  • Date: Tue, 07 Oct 2014 13:48:07 +0300
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Tue, 07 Oct 2014 10:47:44 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

On 10/07/2014 01:21 PM, Razvan Cojocaru wrote:
> On 10/07/2014 11:59 AM, Tamas K Lengyel wrote:
>>
>>
>> On Mon, Oct 6, 2014 at 4:25 PM, Razvan Cojocaru
>> <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>
>>     On 10/03/2014 07:22 PM, Tamas K Lengyel wrote:
>>     >
>>     >
>>     > On Fri, Oct 3, 2014 at 2:37 PM, Andrew Cooper
>>     > <andrew.cooper3@xxxxxxxxxx> wrote:
>>     >
>>     >     On 03/10/14 13:32, Tamas K Lengyel wrote:
>>     >>
>>     >>     On Thu, Oct 2, 2014 at 12:49 PM, Razvan Cojocaru
>>     >>     <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>     >>
>>     >>         Hello,
>>     >>
>>     >>         Currently hvm_memory_event_cr3() and the other
>>     >>         hvm_memory_event_*()
>>     >>         functions in hvm.c can pause the VCPU and send a mem_event
>>     >>         with the new
>>     >>         value of the respective register, but especially in the
>>     case of CR
>>     >>         events (as opposed to MSR events), this is done _after_ the
>>     >>         value is set
>>     >>         (please see hvm_set_cr3() in hvm.c).
>>     >>
>>     >>         It would be interesting from a memory introspection
>>     >>         application's point
>>     >>         of view to be able to receive a mem_event _before_ the value
>>     >>         is set, and
>>     >>         important to be able to veto the change.
>>     >>
>>     >>         A few questions:
>>     >>
>>     >>         1. Would it be acceptable to move the CR3 event sending code
>>     >>         so that a
>>     >>         mem_access client would receive the event _before_ the
>>     write takes
>>     >>         place? Is this likely to break other mem_event clients that
>>     >>         might rely
>>     >>         on the event being received _after_ the value has been set?
>>     >>
>>     >>
>>     >>     Yes, it would break existing applications.
>>     >>
>>     >>
>>     >>         2. I see that mem_event responses from all these cases (EPT
>>     >>         violations,
>>     >>         CR, MSR) are handled in p2m.c's p2m_mem_access_resume()
>>     (seems
>>     >>         to be
>>     >>         confirmed by testing). Is this correct?
>>     >>
>>     >>         3. What would be the sanest, most elegant way to modify
>>     Xen so
>>     >>         that
>>     >>         after a mem_event reply is being received for one of these
>>     >>         cases (CR,
>>     >>         MSR), the write will then be rejected? I'm asking because, as
>>     >>         always,
>>     >>         ideally this would also benefit other Xen users and an
>>     elegant
>>     >>         patch is
>>     >>         always more likely to find its way into mainline than a quick
>>     >>         hack.
>>     >>
>>     >>
>>     >>     You can already block such writes with the existing post-write
>>     >>     event delivery. If you are continuously watching for writes, you
>>     >>     know what the previous value was (for CR events it is actually
>>     >>     delivered to you by Xen as well, as per my recent patch). If you
>>     >>     don't like a particular new value that was set, just reset it to
>>     >>     the value you had / want.
>>     >>
>>     >>     Tamas
>>     >
>>     >     That doesn't work if you join an event listener between the
>>     previous
>>     >     MSR write and one you wish to veto.
>>     >
>>     >
>>     > Yes, that's correct. That's why I said it works if you continuously
>>     > monitor for writes. I think that's a reasonable assumption. We could
>>     > also make the MSR write events deliver the previous value as well
>>     > similar to how the CR events do it. Anyway, AFAIU the hardware traps
>>     > always happen before the write so technically both approaches are
>>     > pre-write from the guest's perspective.
>>
>>     I've actually been looking at this for a bit, and while it's true that
>>     it might work for CR events, it's less clear how that would work for
>>     MSRs.
>>
>>     The CR part might be done in the following fashion:
>>
>>     vcpu_guest_context_any_t ctx;
>>
>>     if (xc_vcpu_getcontext(xch, domain, req.vcpu_id, &ctx) == 0) {
>>             ctx.c.ctrlreg[crNumber] = req.gla; /* old value */
>>             xc_vcpu_setcontext(xch, domain, req.vcpu_id, &ctx);
>>     }
>>
>>     However, only a very small number of MSRs are being offered for
>>     manipulation from dom0 via libxc
>>     (xen/include/public/arch-x86/hvm/save.h):
>>
>>     struct hvm_hw_cpu {
>>     /* ... */
>>         /* msr content saved/restored. */
>>         uint64_t msr_flags;
>>         uint64_t msr_lstar;
>>         uint64_t msr_star;
>>         uint64_t msr_cstar;
>>         uint64_t msr_syscall_mask;
>>         uint64_t msr_efer;
>>         uint64_t msr_tsc_aux;
>>     /* ... */
>>     };
>>
>>     So this would either require expanding the vcpu_guest_context_any_t
>>     information, or adding a new hypercall that sets MSRs (which would wrap
>>     / ultimately end up calling msr_write_intercept(msr, value)).
>>
>>     Am I missing something obvious here, or were you explicitly only talking
>>     about CRs?
>>
>>
>> TBH I haven't looked at MSR events in detail. Conceptually I would
>> expect them to be similar to CR events, so I would expect the
>> write-blocking you outline above to be doable for MSRs as well (if the
>> API gives you the required info). I think expanding the structure
>> information to return all MSRs of interest sounds more reasonable than
>> adding a new hypercall but I don't really know the details to be the
>> judge of it.
> 
> The MSR part is actually more finicky than CRs.
> 
> For one, with CR events you can send the new value in .gfn and the old
> value in .gla, but the way MSR events are being sent now, .gfn has the
> MSR address and .gla the value to be written. So
> hvm_memory_event_traps() would need to be modified to maybe use .offset
> too for MSR events, but that doesn't look very clean: if we try to
> modify the MSR events to match the Xen 4.5 CR events, we need to use
> .gla and .gfn for the old and new values, and .offset for MSR address -
> but that breaks compatibility with current MSR event consumers. And if
> we put the MSR address in .offset, it will be a special case in
> hvm_memory_event_traps(), and that's inelegant to say the least.

Sorry, meant to say "if we put the old value in .offset, it will be a
special case [...]".


Thanks,
Razvan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

