
Re: [RFC PATCH v3 3/5] KVM: x86: Add notifications for Heki policy configuration and violation



On Mon, May 06, 2024 at 06:34:53PM GMT, Sean Christopherson wrote:
> On Mon, May 06, 2024, Mickaël Salaün wrote:
> > On Fri, May 03, 2024 at 07:03:21AM GMT, Sean Christopherson wrote:
> > > > ---
> > > > 
> > > > Changes since v1:
> > > > * New patch. Making user space aware of Heki properties was requested by
> > > >   Sean Christopherson.
> > > 
> > > No, I suggested having userspace _control_ the pinning[*], not merely be
> > > notified of pinning.
> > > 
> > >  : IMO, manipulation of protections, both for memory (this patch) and CPU
> > >  : state (control registers in the next patch) should come from userspace.
> > >  : I have no objection to KVM providing plumbing if necessary, but I think
> > >  : userspace needs to have full control over the actual state.
> > >  : 
> > >  : One of the things that caused Intel's control register pinning series
> > >  : to stall out was how to handle edge cases like kexec() and reboot.
> > >  : Deferring to userspace means the kernel doesn't need to define policy,
> > >  : e.g. when to unprotect memory, and avoids questions like "should
> > >  : userspace be able to overwrite pinned control registers".
> > >  : 
> > >  : And like the confidential VM use case, keeping userspace in the loop is
> > >  : a big benefit, e.g. the guest can't circumvent protections by coercing
> > >  : userspace into writing to protected memory.
> > > 
> > > I stand by that suggestion, because I don't see a sane way to handle
> > > things like kexec() and reboot without having a _much_ more sophisticated
> > > policy than would ever be acceptable in KVM.
> > > 
> > > I think that can be done without KVM having any awareness of CR pinning
> > > whatsoever.  E.g. userspace just needs the ability to intercept CR writes
> > > and inject #GPs.  Off the cuff, I suspect the uAPI could look very similar
> > > to MSR filtering.  E.g. I bet userspace could enforce MSR pinning without
> > > any new KVM uAPI at all.
> > > 
> > > [*] https://lore.kernel.org/all/ZFUyhPuhtMbYdJ76@xxxxxxxxxx
> > 
> > OK, I had concerns about the control not coming directly from the guest,
> > especially in the case of pKVM and confidential computing, but I get your
> 
> Hardware-based CoCo is completely out of scope, because KVM has zero
> visibility into the guest (well, SNP technically allows trapping CR0/CR4,
> but KVM really shouldn't intercept CR0/CR4 for SNP guests).
> 
> And more importantly, _KVM_ doesn't define any policies for CoCo VMs.  KVM
> might help enforce policies that are defined by hardware/firmware, but KVM
> doesn't define any of its own.
> 
> If pKVM on x86 comes along, then KVM will likely get in the business of
> defining policy, but until that happens, KVM needs to stay firmly out of
> the picture.
> 
> > point.  It should indeed be quite similar to the MSR filtering on the
> > userspace side, except that we need another interface for the guest to
> > request such a change (i.e. self-protection).
> > 
> > Would it be OK to keep this new KVM_HC_LOCK_CR_UPDATE hypercall but
> > forward the request to userspace with a VM exit instead?  That would
> > also enable userspace to get the request and directly configure the CR
> > pinning with the same VM exit.
> 
> No?  Maybe?  I strongly suspect that full support will need a richer set of
> APIs than a single hypercall.  E.g. to handle kexec(), suspend+resume,
> emulated SMM, and so on and so forth.  And that's just for CR pinning.
> 
> And hypercalls are hampered by the fact that VMCALL/VMMCALL don't allow for
> delegation or restriction, i.e. there's no way for the guest to communicate
> to the hypervisor that a less privileged component is allowed to perform
> some action, nor is there a way for the guest to say some chunk of CPL0
> code *isn't* allowed to request a transition.  Delegation and restriction
> all have to be done out-of-band.
> 
> It'd probably be more annoying to set up initially, but I think a synthetic
> device with an MMIO-based interface would be more powerful and flexible in
> the long run.  Then userspace can evolve without needing to wait for KVM to
> catch up.
> 
> Actually, potential bad/crazy idea.  Why does the _host_ need to define
> policy?  Linux already knows what assets it wants to (un)protect and when.
> What's missing is a way for the guest kernel to effectively deprivilege and
> re-authenticate itself as needed.  We've been tossing around the idea of
> paired VMs+vCPUs to support VTLs and SEV's VMPLs; what if we
> usurped/piggybacked those ideas, with a bit of pKVM mixed in?
> 
> Borrowing VTL terminology, where VTL0 is the least privileged, userspace
> launches the VM at VTL0.  At some point, the guest triggers the
> deprivileging sequence and userspace creates VTL1.  Userspace also provides
> a way for VTL0 to restrict access to its memory, e.g. to effectively make
> the page tables for the kernel's direct map writable only from VTL1, to
> make kernel text RO (or XO), etc.  And VTL0 could then also completely
> remove its access to code that changes CR0/CR4.
> 
> It would obviously require a _lot_ more upfront work, e.g. to isolate the
> kernel text that modifies CR0/CR4 so that it can be removed from VTL0, but
> that should be doable with annotations, e.g. tag relevant functions with
> __magic or whatever, throw them in a dedicated section, and then
> free/protect the section(s) at the appropriate time.
> 
> KVM would likely need to provide the ability to switch VTLs (or whatever
> they get called), and host userspace would need to provide a decent amount
> of the backend mechanisms and "core" policies, e.g. to manage VTL0 memory,
> tear down (turn off?) VTL1 on kexec(), etc.  But everything else could live
> in the guest kernel itself.  E.g. to have CR pinning play nice with
> kexec(), toss the relevant kexec() code into VTL1.  That way VTL1 can
> verify the kexec() target and tear itself down before jumping into the new
> kernel.
> 
> This is very off the cuff and hand-wavy, e.g. I don't have much of an idea
> what it would take to harden kernel text patching, but keeping the policy
> in the guest seems like it'd make everything more tractable than trying to
> define an ABI between Linux and a VMM that is rich and flexible enough to
> support all the fancy things Linux does (and will do in the future).

Yes, we agree that the guest needs to manage its own policy.  That's why
we implemented Heki for KVM this way, but without VTLs because KVM
doesn't support them.

To sum up, is the VTL approach the only one that would be acceptable for
KVM?  If yes, that would indeed require a *lot* of work for something
we're not sure will be accepted later on.

> 
> Am I crazy?  Or maybe reinventing whatever that McAfee thing was that led to
> Intel implementing EPTP switching?
> 
