
Re: [Xen-devel] Implementing shadow mem-access API



Thanks, that clears up a few things.

I'm not sure I understand why I'd have to handle emulation. My assumption was that the process is like this:

page_fault()
{
    ...

    if the guest pagetables aren't the cause, then
        if regs->error_code indicates a violation of p2m access settings
            send the mem_event and return, letting the guest try again.

    ...
}

The Dom0 listener sets the p2m permissions back to rwx, and when the guest retries the instruction, everything is okay (or the existing fault-handler code runs and does its thing).

Also, why do we have both hvmmem_access_t and p2m_access_t? It looks like the domctl sets the former, but the mem_access EPT API uses the latter.

Thanks again!

On Tue, Apr 23, 2013 at 4:49 AM, Tim Deegan <tim@xxxxxxx> wrote:
Hi,

At 17:56 -0400 on 22 Apr (1366653364), Cutter 409 wrote:
> I'm finally to a point where I can start looking at this more closely. I'm
> trying to wrap my head around the shadow code to figure out the right
> course of action.
>
> I'd want HVMOP_set_mem_access to work with both shadow and EPT, so I'd want
> things to work via p2m somehow. I think I understand this part.
>
> * HVMOP_set_mem_access is used to change the p2m_access_t for the target
> page(s). This should already be implemented I think?

Yep.  The shadow code uses the same p2m implementation as NPT, so that
should all be fine.

> * During propagation, I'll check the p2m map to see if I should mask off
> any permission bits.

Yep.  You'll already be doing a p2m lookup to get the MFN, so you just
need to look at the p2ma as well.

> * On a shadow paging fault, I'll check if the fault was caused by p2m
> permissions, somehow integrating that with the code for read-only guest
> page tables safely.

Yes.  The common case will be handled in _sh_propagate, which is where
the shadow PTE is constructed.  For the rest you'll need to look at the
places where the shadow fault handler calls the emulator and DTRT
(either before the call or in the callbacks that the emulator uses to
access guest memory).

> Questions:
>
> * Just for background, am I correct in my understanding that the log_dirty
> code is used to track which gfns have been written to by the guest, in
> order to speed up migration?

That's right.  It's also used to track which parts of an emulated
framebuffer have been updated, to make VNC connections more efficient.

> * Are multiple shadow tables maintained per domain? Is there one per VCPU?
> One shadow table per guest page table? Is it blown away every time the
> guest changes CR3? I'm having some trouble tracking this down.

There's one set of shadows per domain, shared among all VCPUs.  A given
page of memory may have multiple shadows though, e.g. if it's seen both
as a top-level pagetable and as a leaf pagetable.

Shadows are kept around until:
 - it looks like the page is no longer a pagetable;
 - the guest explicitly tells us it's no longer a pagetable; or
 - we need the memory to shadow some other page.

Mostly, a page's shadow(s) are kept in sync with any changes the guest
makes to the page, by trapping and emulating all writes.  For
performance, we allow some l1 pagetables to be 'out of sync' ('oos' in
the code), letting the guest write to the page directly.  On guest CR3
writes (and other TLB-flush-related activity) we make sure any OOS
shadows are brought up to date.

> * How should I clear/update existing shadow entries after changing the
> p2m_access_t? Can I clear the shadow tables somehow and force everything to
> be repopulated? Is that insane?

It depends how often you're changing the access permissions.
sh_remove_all_mappings() and sh_remove_write_access() will try to flush
mappings of a single MFN from the shadows, but they can be expensive
(e.g. involving a brute-force scan of all shadows) so if you're going to
do a lot of them it may be worth considering batching them up and
calling shadow_blow_tables() once instead.

Cheers,

Tim.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

