
Re: [Xen-devel] [PATCH RESEND v5 5/6] xen/arm: Implement hypercall for dirty page tracing



> > > diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c index
> > > c0b5dd8..0a32301 100644
> > > --- a/xen/arch/arm/domain.c
> > > +++ b/xen/arch/arm/domain.c
> > > @@ -215,6 +215,12 @@ static void ctxt_switch_to(struct vcpu *n)
> > >      WRITE_SYSREG(hcr, HCR_EL2);
> > >      isb();
> > >
> > > +    /* for dirty-page tracing
> > > +     * XXX: how do we consider SMP case?
> > > +     */
> > > +    if ( n->domain->arch.dirty.mode )
> > > +        restore_vlpt(n->domain);
> >
> > This is an interesting question. xen_second is shared between all
> > pcpus, which means that the vlpt is currently only usable from a
> > single physical CPU at a time.
> >
> > Currently the only per-cpu area is the domheap region from 2G-4G. We
> > could steal some address space from the top or bottom of there?
> 
> oh right hmm. Then, how about place the vlpt area starting from 2G
> (between dom heap and xen heap), and let the map_domain_page map the
> domain page starting from the VLPT-end address?
> 
> For this, I think just changing DOMHEAP_VIRT_START to some place (maybe
> 0x88000000) would be suffice.
> 

I just found out that DOMHEAP_VIRT_START must be aligned to the first-level
size, so 0x88000000 wouldn't work.
Instead, I'm thinking of placing the VLPT at the tail of the domheap area,
piggybacking it onto the domheap region. For this, we would use something
like the following layout:

#define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
#define VIRT_LIN_P2M_START     _AT(vaddr_t,0xf8000000)
#define VIRT_LIN_P2M_END       _AT(vaddr_t,0xffffffff)
#define DOMHEAP_VIRT_END       _AT(vaddr_t,0xffffffff)

where the VLPT area overlaps the tail of the domheap. This is necessary
because the dommap size is computed at boot time as
DOMHEAP_VIRT_END - DOMHEAP_VIRT_START, and at that point we also have to
allocate the dommap entries for the VLPT. Alternatively, we could give the
VLPT a full 1GB region of its own with a dedicated per-cpu page table, but
spending 1GB of virtual address space on the VLPT looks wasteful.
So I'm leaning towards the overlapped VLPT. Could you tell me your opinion
on this?

Jaeyong



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

