Re: [Xen-devel] [RFC] Overview of work required to implement mem_access for PV guests

On 25/11/13 07:49, Aravindh Puthiyaparambil (aravindp) wrote:
> The mem_access APIs only work with HVM guests that run on Intel hardware with 
> EPT support. This effort is to enable it for PV guests that run with shadow 
> page tables. To facilitate this, the following will be done:

Are you sure that this is only Intel with EPT?  It looks to be a HAP
feature, which includes AMD with NPT support.

> 1. A magic page will be created for the mem_access (mem_event) ring buffer 
> during the PV domain creation.

Where is this magic page being created from? It will likely have to be
gated on a domain creation flag, to avoid allocating it for the vast
majority of domains which won't want the extra overhead.

> 2. Most of the mem_event / mem_access function and variable names are HVM 
> specific. Given that I am enabling it for PV, I will change the names to 
> something more generic. This also holds for the mem_access hypercalls, which 
> currently fall under HVM ops and do_hvm_op(). My plan is to make them a 
> memory op or a domctl.

You cannot remove the hvmops.  That would break the hypervisor ABI.

You can certainly introduce new (more generic) hypercalls, implement the
hvmop ones in terms of the new ones and mark the hvmop ones as
deprecated in the documentation.
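
For illustration, the compatibility shim could look something like this
(all names here are invented for the sketch, not the real Xen symbols):

```c
#include <assert.h>

/* Hypothetical argument structure shared by old and new entry points. */
typedef struct {
    unsigned int domid;
    unsigned int op;
} mem_access_op_t;

/* New, guest-type-agnostic implementation (illustrative name). */
static int do_mem_access_op(mem_access_op_t *mao)
{
    /* ... dispatch to the HAP (EPT/NPT) or PV shadow backend ... */
    (void)mao;
    return 0;
}

/*
 * Existing HVMOP entry point kept purely for ABI compatibility: it
 * forwards to the generic implementation, and the hvmop is marked
 * deprecated in the documentation.
 */
static int hvmop_set_mem_access(mem_access_op_t *mao)
{
    return do_mem_access_op(mao);
}
```

That keeps existing callers working unmodified while new toolstack code
moves to the generic hypercall.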


> 3. A new shadow option will be added called PG_mem_access. This mode is basic 
> shadow mode with the addition of a table that will track the access 
> permissions of each page in the guest.
> mem_access_tracker[gmfn] = access_type
> If there is a place where I can stash this in an existing structure, please 
> point me at it.
> This will be enabled using xc_shadow_control() before attempting to enable 
> mem_access on a PV guest.
> 4. xc_mem_access_enable/disable(): Change the flow to allow mem_access for PV 
> guests running with PG_mem_access shadow mode.
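
For the tracker itself, I'd expect something along these lines (a toy
sketch with invented names and a fixed bound; real code would size the
table from the domain and hang it off struct shadow_domain or similar):

```c
#include <stdint.h>

/* Hypothetical access types; the real ones live in the p2m headers. */
typedef enum {
    MEM_ACCESS_RWX = 0,   /* default: no restriction */
    MEM_ACCESS_R,
    MEM_ACCESS_RW,
    MEM_ACCESS_N,         /* no access at all */
} mem_access_t;

#define MAX_GMFN 4096     /* toy bound for illustration only */

/* One byte per gmfn; zero-initialised, so the default is RWX. */
static uint8_t mem_access_tracker[MAX_GMFN];
static mem_access_t default_access = MEM_ACCESS_RWX;

static void set_mem_access(unsigned long gmfn, mem_access_t a)
{
    if ( gmfn < MAX_GMFN )
        mem_access_tracker[gmfn] = (uint8_t)a;
}

static mem_access_t get_mem_access(unsigned long gmfn)
{
    return gmfn < MAX_GMFN ? (mem_access_t)mem_access_tracker[gmfn]
                           : default_access;
}
```

A byte per frame keeps the overhead bounded, which matters if the table
has to exist for the whole lifetime of the domain.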
> 5. xc_domain_set_access_required(): No change required
> 6. xc_(hvm)_set_mem_access(): This API has two modes. If the start 
> pfn/gmfn is ~0ull, it is treated as a request to set the default access; here 
> we will call shadow_blow_tables() after recording the default access type for 
> the domain. In the mode where it is setting the mem_access type for 
> individual gmfns, we will call a function that will drop the shadow for that 
> individual gmfn. I am not sure which function to call. Will 
> sh_remove_all_mappings(gmfn) do the trick? Please advise.
> The other issue here is that in the HVM case we could use 
> xc_hvm_set_mem_access(gfn, nr) and the permissions for the range gfn to 
> gfn+nr would be set. This won't be possible in the PV case, as we are 
> actually dealing with mfns, and mfn to mfn+nr need not belong to the same 
> guest. But given that setting *all* page access permissions is done 
> implicitly when setting the default access, I think we can live with setting 
> page permissions one at a time as they are faulted in.
> 7. xc_(hvm)_get_mem_access(): This will return the access type for gmfn from 
> the mem_access_tracker table.
> 8. In sh_page_fault() perform access checks similar to ept_handle_violation() 
> / hvm_hap_nested_page_fault().
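
For reference, the violation check in the fault path could look
something like this (entirely illustrative; the real PFEC bit names and
access types come from the x86 and p2m headers):

```c
#include <stdbool.h>

/* x86 page fault error code bits (architectural values). */
#define PFEC_write (1u << 1)
#define PFEC_insn  (1u << 4)

/* Hypothetical access types for this sketch. */
typedef enum { A_RWX, A_R, A_RW, A_N } acc_t;

/*
 * Loosely modelled on the checks in hvm_hap_nested_page_fault():
 * decide whether this fault violates the page's mem_access
 * restriction and therefore needs a mem_event sent to the listener.
 */
static bool violates_access(unsigned int pfec, acc_t a)
{
    switch ( a )
    {
    case A_N:   return true;                          /* any access faults */
    case A_R:   return pfec & (PFEC_write | PFEC_insn);
    case A_RW:  return pfec & PFEC_insn;
    case A_RWX: return false;
    }
    return false;
}
```

If the predicate fires, sh_page_fault() would pause the vcpu and post a
mem_event, rather than propagating the entry.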
> 9. Hook into _sh_propagate() and set up the L1 entries based on access 
> permissions. This will be similar to ept_p2m_type_to_flags(). I think I might 
> also have to hook into the code that emulates page table writes to ensure 
> access permissions are honored there too.
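
The _sh_propagate() hook would presumably restrict the flags the guest
asked for, so that forbidden operations fault back into
sh_page_fault(). A sketch of an ept_p2m_type_to_flags() analogue
(invented names; the PTE bit values are architectural):

```c
#include <stdint.h>

/* x86 PTE bits (architectural values). */
#define _PAGE_PRESENT  (1ULL << 0)
#define _PAGE_RW       (1ULL << 1)
#define _PAGE_NX       (1ULL << 63)

/* Hypothetical access types for this sketch. */
typedef enum { ACCESS_RWX, ACCESS_R, ACCESS_RW, ACCESS_RX, ACCESS_N } access_t;

/*
 * Start from the flags the guest wanted and strip whatever the access
 * type forbids; the restricted operation then faults and can be turned
 * into a mem_event.
 */
static uint64_t access_to_l1_flags(uint64_t guest_flags, access_t a)
{
    uint64_t f = guest_flags;

    switch ( a )
    {
    case ACCESS_N:   f &= ~_PAGE_PRESENT;            break;
    case ACCESS_R:   f &= ~_PAGE_RW; f |= _PAGE_NX;  break;
    case ACCESS_RW:  f |= _PAGE_NX;                  break;
    case ACCESS_RX:  f &= ~_PAGE_RW;                 break;
    case ACCESS_RWX:                                 break;
    }
    return f;
}
```

The same masking would need to apply on the PT-write emulation path, as
you say, or the restrictions could be bypassed there.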
> Please give feedback on the above.
> Thanks,
> Aravindh
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel
