Re: [Xen-devel] [PATCH v5 1/5] x86/mem_access: put p2m_{get/set}_suppress_ve under CONFIG_HVM
On 9/21/18 7:03 PM, Wei Liu wrote:
> On Fri, Sep 21, 2018 at 06:57:39PM +0300, Razvan Cojocaru wrote:
>> On 9/21/18 6:54 PM, Wei Liu wrote:
>>> They are used by HVM code only.
>>>
>>> Signed-off-by: Wei Liu <wei.liu2@xxxxxxxxxx>
>>> ---
>>> v5: new
>>> ---
>>>  xen/arch/x86/mm/mem_access.c | 2 ++
>>>  1 file changed, 2 insertions(+)
>>>
>>> diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c
>>> index 2217bda..826c35f 100644
>>> --- a/xen/arch/x86/mm/mem_access.c
>>> +++ b/xen/arch/x86/mm/mem_access.c
>>> @@ -501,6 +501,7 @@ void arch_p2m_set_access_required(struct domain *d, bool access_required)
>>>      }
>>>  }
>>>
>>> +#ifdef CONFIG_HVM
>>>  /*
>>>   * Set/clear the #VE suppress bit for a page. Only available on VMX.
>>>   */
>>> @@ -600,6 +601,7 @@ int p2m_get_suppress_ve(struct domain *d, gfn_t gfn, bool *suppress_ve,
>>>
>>>      return 0;
>>>  }
>>> +#endif
>>>
>>>  /*
>>>   * Local variables:
>>
>> Hello Wei,
>>
>> I am working on moving these functions to p2m.c (but waiting for
>> George's reply on the VMX #VE checks), so if you'd like - and feel
>> that's appropriate - I can also put that code under CONFIG_HVM in the
>> cleanup patch. Is that something you'd be interested in?
>
> That would be fine by me, but ...
>
>>
>> Otherwise, Acked-by: Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx>, and
>> I'll move the new code as well.
>>
>
> since you ack this, I might as well just commit this patch so you can
> rebase.
>
> In any case, the code that ends up in p2m.c will have to be enclosed in
> CONFIG_HVM.
>
> I have further plans to split code from p2m.c into p2m-hvm.c or the
> like, but that can wait until you finish your patches.

OK, no problem.

Thanks,
Razvan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
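For readers unfamiliar with the pattern under discussion, below is a
minimal, self-contained sketch of guarding an HVM-only function with
CONFIG_HVM while still giving common code something to call when HVM
support is compiled out. The function name (demo_set_suppress_ve) and
the stub are illustrative only, not the actual Xen symbols; the real
functions live in xen/arch/x86/mm/mem_access.c as shown in the diff
above.

/*
 * Hedged sketch, not Xen source: demonstrates the CONFIG_HVM guard
 * pattern from the patch above. When CONFIG_HVM is defined, the real
 * implementation is built; otherwise an inline stub keeps common
 * callers compiling. All names here are hypothetical.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

#define CONFIG_HVM 1    /* set by Kconfig in a real Xen build */

#ifdef CONFIG_HVM
/* Built only when HVM support is configured in. */
static int demo_set_suppress_ve(unsigned long gfn, bool suppress_ve)
{
    printf("gfn %lu: #VE suppress bit %s\n",
           gfn, suppress_ve ? "set" : "cleared");
    return 0;
}
#else
/* Stub: common code still compiles when HVM support is configured out. */
static inline int demo_set_suppress_ve(unsigned long gfn, bool suppress_ve)
{
    (void)gfn;
    (void)suppress_ve;
    return -EOPNOTSUPP;
}
#endif

int main(void)
{
    return demo_set_suppress_ve(0x1000, true);
}

The split into a separate p2m-hvm.c that Wei mentions would typically
pair this with a conditional object rule such as
obj-$(CONFIG_HVM) += p2m-hvm.o, the convention Xen's Makefiles already
use, so the whole file drops out of !HVM builds instead of being
wrapped in #ifdef.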