
Re: [Xen-devel] Accessing Dom0 Physical memory from xen, via direct mappings (PML4:262-271)



At 09:32 -0700 on 13 Mar (1331631148), Shriram Rajagopalan wrote:
> Yep. I am aware of the above issues. As far as contiguity is concerned,
> I was hoping (*naively/lazily*) that if I allocate a huge chunk (1G or so)
> using posix_memalign, it would start at a page boundary and also be
> contiguous *most* of the time.

I think you're playing with fire -- posix_memalign() only aligns the
*virtual* address, so you have no way of knowing whether you were lucky
enough to get a physically contiguous region, and if not you'll
silently corrupt memory _and_ pollute your results.
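
If you do go down that road, at least verify rather than hope.  On
Linux you can walk /proc/self/pagemap and check that the frames backing
the buffer are consecutive.  A minimal sketch (the pagemap format is
real; note that in a PV dom0 these are *pseudo-physical* frame numbers,
so consecutive PFNs still don't prove the machine frames are contiguous
-- you'd need a PFN-to-MFN translation on top):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define PAGE_SIZE 4096UL
#define PFN_MASK  ((1ULL << 55) - 1)   /* bits 0-54 of a pagemap entry */

/* Return 1 if the frames backing [buf, buf+len) are consecutive,
 * 0 if not, -1 on error (e.g. a page not yet populated). */
static int is_phys_contiguous(void *buf, size_t len)
{
    int fd = open("/proc/self/pagemap", O_RDONLY);
    uint64_t entry, prev_pfn = 0;
    unsigned long vfn = (unsigned long)buf / PAGE_SIZE;
    size_t i, npages = (len + PAGE_SIZE - 1) / PAGE_SIZE;

    if (fd < 0)
        return -1;
    for (i = 0; i < npages; i++) {
        if (pread(fd, &entry, 8, (off_t)(vfn + i) * 8) != 8 ||
            !(entry & (1ULL << 63))) {        /* present bit clear */
            close(fd);
            return -1;
        }
        if (i && (entry & PFN_MASK) != prev_pfn + 1) {
            close(fd);
            return 0;                          /* hole in the run */
        }
        prev_pfn = entry & PFN_MASK;
    }
    close(fd);
    return 1;
}

int main(void)
{
    size_t len = 1UL << 30;
    void *buf;

    if (posix_memalign(&buf, PAGE_SIZE, len))
        return 1;
    memset(buf, 0, len);   /* touch every page so it's present */
    printf("contiguous: %d\n", is_phys_contiguous(buf, len));
    return 0;
}

Touching the buffer first matters: posix_memalign memory isn't
populated until first write, and unpopulated pages have no PFN to read.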

> Well, the buffer acts as a huge log dirty "byte" map (a byte per word).
> I am skipping the reason for doing this huge byte map, for the sake of
> brevity.
> 
> Can I have xen allocate this huge buffer? (a byte per 8-byte word means
> about 128M for a 1G guest). And if I were to have this byte-map
> per-vcpu, it would mean 512M worth of RAM for a 4-vcpu guest.
> 
> Is there a way I could increase the xen heap size to be able to
> allocate this much memory?
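
(Checking the arithmetic: a 1G guest holds 2^30 / 2^3 = 2^27 eight-byte
words, so one byte per word is 2^27 bytes = 128M, and one map per vcpu
on a 4-vcpu guest is 4 x 128M = 512M.)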

IIRC on x86_64 the xen heap is the same as the dom heap, so it already
includes all free memory.  Or you could pick a 1GB region at boot time,
make sure it never gets handed to the memory allocators, and then mark
it all as owned by dom0 in the frametable.
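
An entirely untested sketch of that second option --
alloc_boot_pages(), share_xen_page_with_guest() and XENSHARE_writable
are the real internals, but the two wrappers and where you'd call them
(the reservation while the boot allocator still owns the memory, the
sharing once dom0 has been constructed) are my assumptions:

/* xen/arch/x86/ sketch: reserve a 1GB machine-contiguous region from
 * the boot allocator and hand the frames to dom0. */
#include <xen/init.h>
#include <xen/mm.h>
#include <xen/sched.h>
#include <asm/mm.h>

#define BYTEMAP_PAGES (1UL << (30 - PAGE_SHIFT))  /* 1GB of frames */

static unsigned long bytemap_mfn;   /* first frame of the region */

/* Call before end_boot_allocator(): the frames never reach the heap. */
void __init reserve_bytemap(void)
{
    /* pfn_align == nr_pfns gives a 1GB-aligned, contiguous run. */
    bytemap_mfn = alloc_boot_pages(BYTEMAP_PAGES, BYTEMAP_PAGES);
}

/* Call once dom0 exists: mark every frame as owned by dom0. */
void share_bytemap_with_dom0(struct domain *dom0)
{
    unsigned long i;

    for ( i = 0; i < BYTEMAP_PAGES; i++ )
        share_xen_page_with_guest(mfn_to_page(bytemap_mfn + i),
                                  dom0, XENSHARE_writable);
}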

> And how do I map the xen memory in dom0? I vaguely remember seeing
> similar code in xentrace, but if you could point me in the right
> direction, it would be great.

I don't recall the details, but xentrace isn't that big.  If I were
doing it, I'd use share_xen_page_with_guest() or similar to allow dom0
to see it, and then map it with ordinary mapping hypercalls.
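
For reference, the xentrace pattern on the dom0 side is roughly the
following -- xc_map_foreign_range() and DOMID_XEN are real libxc/Xen,
but how userspace learns the first MFN of the region (a custom sysctl,
say) is left as an assumption:

#include <stdio.h>
#include <sys/mman.h>
#include <xenctrl.h>

/* Map a Xen-owned, machine-contiguous region into this (privileged)
 * process, the same way xentrace maps the trace buffers.  DOMID_XEN
 * tells privcmd the frames belong to Xen, not to a domain. */
static void *map_bytemap(xc_interface *xch, unsigned long first_mfn,
                         size_t size)
{
    return xc_map_foreign_range(xch, DOMID_XEN, size,
                                PROT_READ | PROT_WRITE, first_mfn);
}

int main(void)
{
    size_t size = 128UL << 20;              /* 128M bytemap */
    unsigned long first_mfn = 0;            /* from your own interface */
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    void *map;

    if ( !xch )
        return 1;
    map = map_bytemap(xch, first_mfn, size);
    if ( map )
    {
        printf("mapped at %p\n", map);
        munmap(map, size);
    }
    xc_interface_close(xch);
    return 0;
}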

Tim.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
