
Re: [Xen-devel] [PATCH 00/11] Alternate p2m: support multiple copies of host p2m



On 01/15/2015 08:15 AM, Tim Deegan wrote:
> Hello,
> 
> Thanks for sending this series - in particular, thank you for sending
> it early in the release cycle!  I'll review some of the patches
> individually but since I expect there will be some changes to come in
> future versions I'm not going to go into too much detail.
> 
> I see there's been some discussion of how this would be useful for an
> out-of-domain inspection tool, but could you talk some more about the
> usefulness of the in-VM callers?  I'm not sure what advantage it
> brings over a kernel playing the same tricks in normal pagetables --
> after all an attacker in the kernel can make hypercalls and so get
> around any p2m restrictions.
> 

Our original motivation for this work is an internal requirement to
enable in-guest, agent-based memory partitioning for Windows domains.

There is commercially available software that uses a security
hypervisor to partition memory for Windows running on physical
hardware, and we want to make the same thing possible on
virtualized hardware.

Provided the in-domain agent loads early enough, there are ways to
prevent other code from using the hypercalls. In the Windows world,
which is where we see an immediate use for these features, there
are OS features we can use to make sure the agent loads early.
In the context of a Xen domain, we might then need some additions via
extra patches to help prevent rogue actors from using the hypercalls,
but it can be done.

We're aware that others are interested in an out-of-domain agent model,
and not necessarily with Windows as the target OS, and we're trying
to accommodate that usage, but we'll need help.

> 
> Looking at the code, the first thing that strikes me about it is that
> you've tried to make a split between common code and VMX-specific
> implementation details.  Thank you for that!  My only reservations
> around that are that some of the naming of things in common code are
> too vmx-specific.
> 
> I think it's probably OK to use 've', though I think that the term
> 'notify' might be better (which you use in the hypercall interface).
> Using 'eptp' in common code is less good, though I think almost
> everywhere in common code, you could just use 'p2m' instead.
> Also, functions like 'find_by_eptp' that are only ever called
> from vmx code don't need to be plumbed through common wrappers.
> Also, I think you can probably s/altp2mhvm/altp2m/ throughout.
>

As I said in discussion with Andrew, my aim was to make these changes
extensible to AMD processors, should they support multiple copies of
whatever their EPT equivalent is, simply by emulating VMFUNC and #VE.
That's why there are some wrappers in the implementation that appear
redundant.
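
To illustrate the shape I have in mind, here is a minimal sketch; the
names and the hook table below are invented for illustration and are
not the series' actual code. Common code only ever calls a
vendor-neutral wrapper, which dispatches through per-vendor hooks, so
an AMD port could emulate VMFUNC/#VE behind the same interface without
touching the callers:

    /* Hedged sketch -- invented names, not the patch series' code. */
    struct vcpu;                        /* opaque in this sketch */

    struct altp2m_vendor_ops {
        /* VMX: is #VE ("notify") set up for this vcpu?  SVM: emulated. */
        int  (*vcpu_has_notify)(const struct vcpu *v);
        /* VMX: VMFUNC switches the active EPTP; SVM: switch its p2m root. */
        void (*switch_view)(struct vcpu *v, unsigned int view_idx);
    };

    static const struct altp2m_vendor_ops *altp2m_ops; /* set at vendor init */

    /* Common wrapper: no 'eptp' or other VMX terms leak into callers. */
    void altp2m_switch_view(struct vcpu *v, unsigned int view_idx)
    {
        if ( altp2m_ops && altp2m_ops->switch_view )
            altp2m_ops->switch_view(v, view_idx);
    }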
 
> 
> The second thing is how similar some of this is to nested p2m code,
> making me wonder whether it could share more code with that.  It's not
> as much duplication as I had feared, but e.g. altp2m_write_p2m_entry()
> is _identical_ to nestedp2m_write_p2m_entry(), (making the
> copyright claim at the top of the file quite dubious, BTW).
> 

I did initially use nestedp2m_write_p2m_entry directly, but I knew
that wouldn't be acceptable! On this specific point, I would be more
inclined to refactor the normal write-entry routine so that it can be
called everywhere, since both the nested and alternate ones are simply
copies of part of the normal one.
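
Roughly what I mean, as a sketch with made-up names and simplified
types rather than the real Xen structures: one shared helper does the
actual entry write and the common bookkeeping, and each p2m flavour
calls it through a thin wrapper.

    /* Hedged sketch -- simplified stand-ins, not real Xen code. */
    #include <stdint.h>

    typedef uint64_t p2m_entry_t;

    struct p2m_view {
        int needs_flush;            /* stand-in for real per-view state */
    };

    /* The common body that today exists in near-identical copies. */
    static void p2m_write_entry_common(struct p2m_view *view,
                                       p2m_entry_t *slot, p2m_entry_t new_e)
    {
        *slot = new_e;              /* real code uses a safe/atomic write */
        view->needs_flush = 1;      /* plus flush and log-dirty handling */
    }

    /* Thin per-flavour wrappers keep flavour-specific steps isolated. */
    void hostp2m_write_entry(struct p2m_view *v, p2m_entry_t *s, p2m_entry_t e)
    {
        p2m_write_entry_common(v, s, e);
    }

    void altp2m_write_entry(struct p2m_view *v, p2m_entry_t *s, p2m_entry_t e)
    {
        p2m_write_entry_common(v, s, e);   /* nothing extra needed today */
    }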

I'm well aware that there are similarities with the nested p2m code.
There are significant issues with that code once you start using it
in earnest.

I've spoken to the maintainers of the nested p2m code on a number of
occasions to discuss my concerns, but I need to be clear that I cannot
and will not submit patches that touch that code.

> 
> In order to work towards getting this series merged, I think we have
> four things to consider:
> 
> - General design.  I've made some comments above and some of the other
>   maintainers have replied separately.  Assuming that the case can be
>   made for needing this in the hypervisor at all, I think the overall
>   direction is probably a good one.
> 
> - Feature compatibility/completeness.  You pointed out yourself that
>   it doesn't work with nested HVM or migration.  I think I'd have to
>   add mem_event/access/paging and PCI passthrough to the list of
>   features that ought to still work.  I'm resigned to the idea that
>   many new features don't work with shadow pagetables. :)
> 

The intention is that mem_event/access should still work. I haven't
specifically looked at paging, but I don't see any fundamental reason
why it shouldn't. I suspect PCI passthrough won't. Does nested HVM
work with migration? Is it simply not acceptable to submit a feature
as experimental, with known compatibility issues? I had assumed it
was, based on the nested HVM status as documented in the release
notes.

> - Testing and sample code.  If we're to carry this feature in the
>   tree, we'll need at least some code to make use of it; ideally
>   some sort of test we can run to find regressions later.
> 

Understood. As I said in another thread, I'm hoping the community
will be interested enough to help with that. If not, I'll try to
figure something out.

> - Issues of coding style and patch hygiene.  I don't think it's
>   particularly useful to go into that in detail at this stage, but I
>   did see some missing spaces in parentheses, e.g. for ( <-here-> ),
>   and the patch series should be ordered so that the new feature is
>   enabled in the last patch (in particular after 'fix log-dirty handling'!)
> 

The reason I put the 'fix log-dirty handling' patch there is that I
wasn't convinced it would be acceptable, and it fixes a bug that
already exists and remains unfixed in the nested p2m code. IOW,
without that fix, alternate p2m would work as well as nested p2m does.

Ed

