
Re: [Xen-devel] [PATCH 1/1] x86/xen: Reset VCPU0 info pointer after shared_info remap



Hi Boris,

Thanks for the feedback.

On 5/7/18, 8:13 AM, "Boris Ostrovsky" <boris.ostrovsky@xxxxxxxxxx> wrote:

    > diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
    > index 6b424da1ce75..c78b3e8fb2e5 100644
    > --- a/arch/x86/xen/enlighten_hvm.c
    > +++ b/arch/x86/xen/enlighten_hvm.c
    > @@ -71,6 +71,19 @@ static void __init xen_hvm_init_mem_mapping(void)
    >  {
    >   early_memunmap(HYPERVISOR_shared_info, PAGE_SIZE);
    >   HYPERVISOR_shared_info = __va(PFN_PHYS(shared_info_pfn));
    > +
    > + /*
    > +  * The virtual address of the shared_info page has changed, so
    > +  * the vcpu_info pointer for VCPU 0 is now stale.
    
    Is it "has changed" or "has changed if kaslr is on"?

It's "has changed".  See commit 4ca83dcf4e3bc0c98836dbb97553792ca7ea5429 .  
It's a way to make kaslr work, but it's done regardless of whether it's enabled 
or not.
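
To make that concrete, the HVM boot path goes roughly like this (paraphrased
from arch/x86/xen/enlighten_hvm.c as I read it, so treat the details as
approximate):

        /* Paraphrase of the relevant flow -- not a verbatim quote. */

        /* xen_hvm_guest_init(), very early in boot: */
        HYPERVISOR_shared_info = early_memremap(PFN_PHYS(shared_info_pfn),
                                                PAGE_SIZE);
        xen_vcpu_info_reset(0);         /* xen_vcpu for VCPU 0 now points
                                         * into that early mapping */

        /* xen_hvm_init_mem_mapping(), once the direct map is usable: */
        early_memunmap(HYPERVISOR_shared_info, PAGE_SIZE);
        HYPERVISOR_shared_info = __va(PFN_PHYS(shared_info_pfn));
        /* xen_vcpu for VCPU 0 still points into the old, now torn-down
         * early mapping -- whether or not KASLR moved anything. */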
 
    > +  *
    > +  * The prepare_boot_cpu callback will re-initialize it via
    > +  * xen_vcpu_setup, but we can't rely on that to be called for
    > +  * old Xen versions (xen_have_vector_callback == 0).
    > +  *
    > +  * It is, in any case, bad to have a stale vcpu_info pointer
    > +  * so reset it now.
    > +  */
    > + xen_vcpu_info_reset(0);
    
    
    Why not xen_vcpu_setup(0)?
    
Basically, I wanted to be minimally invasive. xen_vcpu_setup() does a little 
more work (it tries the VCPU placement hypercall), and it will be called later 
in any case. So doing just the basic xen_vcpu_info_reset() for VCPU 0 seems 
like the best approach; it simply repeats what is done for VCPU 0 earlier in 
boot, which is also a vcpu_info reset.
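
For comparison, xen_vcpu_setup() looks roughly like this (again paraphrasing
arch/x86/xen/enlighten.c; the exact code differs a bit between kernel
versions):

        /* Paraphrase of xen_vcpu_setup() -- not a verbatim quote. */
        void xen_vcpu_setup(int cpu)
        {
                struct vcpu_register_vcpu_info info;
                struct vcpu_info *vcpup;
                int err;

                if (xen_have_vcpu_info_placement) {
                        vcpup = &per_cpu(xen_vcpu_info, cpu);
                        info.mfn = arbitrary_virt_to_mfn(vcpup);
                        info.offset = offset_in_page(vcpup);

                        /* Ask Xen to place vcpu_info in our per-cpu area. */
                        err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info,
                                                 xen_vcpu_nr(cpu), &info);
                        if (err)
                                xen_have_vcpu_info_placement = 0;
                        else
                                per_cpu(xen_vcpu, cpu) = vcpup;
                }

                if (!xen_have_vcpu_info_placement)
                        xen_vcpu_info_reset(cpu);
        }

All we actually need at this point in boot is the xen_vcpu_info_reset() part;
the placement hypercall will still be attempted later on the prepare_boot_cpu
path.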

Frank




 

