Re: [Xen-devel] [PATCH] x86/ept: pass correct level to p2m_entry_modify
On Wed, Jul 03, 2019 at 10:22:03AM +0000, Jan Beulich wrote:
> On 03.07.2019 11:43, Roger Pau Monne wrote:
> > EPT differs from NPT and shadow when translating page orders to levels
> > in the physmap page tables. The EPT page table level for an order 0 page
> > is 0, while NPT and shadow use 1 instead, ie: EPT page table levels
> > start at 0 while NPT and shadow levels start at 1.
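
To make the two conventions concrete, here is a small standalone illustration (not Xen code; the helper names are invented for the example): an order-0 (4 KiB) page sits at EPT level 0 but NPT/shadow level 1, an order-9 (2 MiB) page at EPT level 1 but NPT/shadow level 2, and so on.

#include <stdio.h>

/* EPT counts the leaf page table level as 0. */
static unsigned int ept_level_from_order(unsigned int order)
{
    return order / 9;               /* order 0 -> 0, 9 -> 1, 18 -> 2 */
}

/* NPT and shadow count the leaf page table level as 1. */
static unsigned int npt_level_from_order(unsigned int order)
{
    return order / 9 + 1;           /* order 0 -> 1, 9 -> 2, 18 -> 3 */
}

int main(void)
{
    unsigned int orders[] = { 0, 9, 18 };
    unsigned int i;

    for ( i = 0; i < 3; i++ )
        printf("order %2u: EPT level %u, NPT/shadow level %u\n",
               orders[i], ept_level_from_order(orders[i]),
               npt_level_from_order(orders[i]));

    return 0;
}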
> >
> > Fix the p2m_entry_modify call in atomic_write_ept_entry to always add
> > one to the level, in order to match NPT and shadow usage.
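
The shape of that fix at the p2m_entry_modify call site in atomic_write_ept_entry would be roughly the following (a sketch rather than the literal hunk; the ept_entry_t field names and the surrounding error handling are assumed):

    /* EPT hands a 0-based level to this path, so translate to the 1-based
     * convention p2m_entry_modify expects. */
    rc = p2m_entry_modify(p2m, new.sa_p2mt, old.sa_p2mt,
                          _mfn(new.mfn), _mfn(old.mfn), level + 1);
    if ( rc )
        return rc;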
> >
> > While there, also fix the p2m_entry_modify BUG_ON condition so it
> > triggers when foreign or ioreq entries are attempted at a level other
> > than 1. That should help catch future errors related to the level
> > parameter.
> >
> > Fixes: c7a4c0 ('x86/mm: split p2m ioreq server pages special handling into helper')
>
> A 6-digit hash is definitely too short in the long run. I understand
> that this then wants backporting to the 4.12 tree.
Yes.
Is there consensus on how many digits to use: 8, 12, 16?
> > --- a/xen/include/asm-x86/p2m.h
> > +++ b/xen/include/asm-x86/p2m.h
> > @@ -946,7 +946,7 @@ static inline int p2m_entry_modify(struct p2m_domain *p2m, p2m_type_t nt,
> >                                     p2m_type_t ot, mfn_t nfn, mfn_t ofn,
> >                                     unsigned int level)
> >  {
> > -    BUG_ON(level > 1 && (nt == p2m_ioreq_server || nt == p2m_map_foreign));
> > +    BUG_ON(level != 1 && (nt == p2m_ioreq_server || nt == p2m_map_foreign));
>
> Wouldn't you better leave this alone and add BUG_ON(!level)?
That's also an option. I guess your check is better because it will
trigger for any call with level == 0, while mine would only do so if such
a call also adds an entry of type ioreq or foreign.
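
The difference between the two variants can be shown with a small standalone mock (not Xen code: assert() stands in for BUG_ON and the p2m type list is reduced to what the example needs):

#include <assert.h>
#include <stdio.h>

typedef enum {
    p2m_ram_rw,
    p2m_ioreq_server,
    p2m_map_foreign,
} p2m_type_t;

/* Variant 1: only ioreq/foreign entries at a wrong level trip the check. */
static void check_v1(p2m_type_t nt, unsigned int level)
{
    assert(!(level != 1 && (nt == p2m_ioreq_server || nt == p2m_map_foreign)));
}

/* Variant 2: keep the original check and additionally reject any caller
 * that still passes a 0-based (EPT-style) level, whatever the type. */
static void check_v2(p2m_type_t nt, unsigned int level)
{
    assert(level != 0);
    assert(!(level > 1 && (nt == p2m_ioreq_server || nt == p2m_map_foreign)));
}

int main(void)
{
    /* A plain RAM entry with a forgotten level translation (level == 0)
     * slips past the first variant... */
    check_v1(p2m_ram_rw, 0);
    /* ...but would trip the second one (uncomment to see it fire). */
    /* check_v2(p2m_ram_rw, 0); */

    /* Correct usage at the leaf level passes both. */
    check_v1(p2m_ioreq_server, 1);
    check_v2(p2m_ioreq_server, 1);

    printf("checks passed\n");
    return 0;
}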
Thanks, Roger.