
Re: [Xen-devel] [PATCH 15/16] Infrastructure for manipulating 3-level event channel pages



>>> On 04.02.13 at 14:45, Wei Liu <wei.liu2@xxxxxxxxxx> wrote:
> On Mon, 2013-02-04 at 11:29 +0000, Jan Beulich wrote:
>> >> 
>> >> So this alone already is up to 16 pages per guest, and hence a
>> >> theoretical maximum of 512k pages, i.e. 2GB of mapped space.
>> > 
>> > That's given a theoretical 32k guests? Ouch. It also ignores the need
>> > for other global mappings.
>> > 
>> > On the flip side, only a minority of domains are likely to be using the
>> > extended scheme, and I expect even those which are would not be using
>> > all 16 pages, so maybe we can fault them in on demand as we bind/unbind
>> > evtchns.
>> > 
>> > Where does 16 come from? How many pages do we end up with at each level
>> > in the new scheme?
>> 
>> Patch 11 defines EVTCHN_MAX_L3_PAGES to be 8, and we've
>> got two of them (pending and mask bits).
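
As a rough sketch of that arithmetic (assuming 4KB pages; only
EVTCHN_MAX_L3_PAGES is a real constant from patch 11, the other names
are made up for illustration):

    #include <stdio.h>

    #define PAGE_SIZE            4096UL
    #define EVTCHN_MAX_L3_PAGES  8UL      /* from patch 11 */
    #define NR_EVTCHN_BITMAPS    2UL      /* pending and mask bits */
    #define MAX_DOMAINS          32768UL  /* theoretical 32k guests */

    int main(void)
    {
        unsigned long per_guest = EVTCHN_MAX_L3_PAGES * NR_EVTCHN_BITMAPS;
        unsigned long total     = per_guest * MAX_DOMAINS;

        /* 16 pages per guest, 512k pages, 2048MB (2GB) mapped */
        printf("%lu pages/guest, %luk pages, %luMB mapped\n",
               per_guest, total >> 10, (total * PAGE_SIZE) >> 20);
        return 0;
    }
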
>> 
>> > Some levels of the trie are per-VCPU; did you account for that already
>> > in the 2GB?
>> 
>> No, I didn't, as it would only increase the number, and make
>> the math less clear.
>> 
>> >>  The
>> >> global page mapping area, however, is only 1GB in size on x86-64
>> >> (didn't check ARM at all)...
>> > 
>> > There isn't currently a global page mapping area on 32-bit ARM (I
>> > suppose we have avoided them somehow...) but obviously 2GB would be a
>> > problem in a 4GB address space.
>> > 
>> > On ARM we currently have 2GB for domheap mappings, which I suppose we
>> > would split if we needed a global page map.
>> > 
>> > These need to be global so we can deliver evtchns to VCPUs which aren't
>> > running, right? I suppose mapping on demand (other than for a running
>> > VCPU) would be prohibitively expensive.
>> 
>> Likely, especially for high rate ones.
>> 
>> > Could we make this space per-VCPU (or per-domain) by saying that a
>> > domain maps its own evtchn pages plus the required pages from other
>> > domains with which an evtchn is bound? Might be tricky to arrange
>> > though, especially with the per-VCPU pages and affinity changes?
>> 
>> Even without that trickiness it wouldn't work, I'm afraid: In various
>> cases we need to be able to raise the events out of context (timer,
>> IRQs from passed through devices).
>> 
>> Jan
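
To illustrate why those pages need a permanent global mapping: raising
an event may happen from timer or passthrough-IRQ context while an
unrelated guest is running, so the bitmap page has to be mapped
regardless of context. A simplified sketch; all names and types here
are stand-ins, not the actual Xen code:

    #include <limits.h>
    #include <stdatomic.h>

    #define PAGE_SIZE     4096
    #define BITS_PER_PAGE (PAGE_SIZE * CHAR_BIT)

    /* Stand-in for one domain's globally mapped pending-bit pages. */
    struct evtchn_l3 {
        atomic_ulong *pending[8];   /* up to EVTCHN_MAX_L3_PAGES pages */
    };

    /*
     * This may run while some other guest's page tables are active,
     * so it only works if the pending pages stay mapped at all times.
     */
    static void evtchn_set_pending(struct evtchn_l3 *e, unsigned int port)
    {
        const unsigned int wbits = sizeof(unsigned long) * CHAR_BIT;
        atomic_ulong *page = e->pending[port / BITS_PER_PAGE];
        unsigned int bit = port % BITS_PER_PAGE;

        atomic_fetch_or(&page[bit / wbits], 1UL << (bit % wbits));
    }
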
> 
> So I came up with the following comment on the 3-level registration
> interface (it is not specific to the __map_l3_array() function).
> 
> /*
>  * Note to 3-level event channel users:
>  * Only enable 3-level event channels for Dom0 or driver domains,
>  * because a 3-level event channel consumes (16 + nr_vcpus) pages of
>  * global mapping area in Xen.
>  */
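
A sketch of what enforcing that note might look like (every name here
is hypothetical; in particular, is_driver_domain() does not exist,
which is exactly the question below):

    #include <errno.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical, much-simplified stand-in for struct domain. */
    struct domain {
        uint16_t domain_id;
        bool     is_privileged;   /* Dom0 */
    };

    static bool is_driver_domain(const struct domain *d)
    {
        /*
         * Open question: Xen has no flag that marks a domain as a
         * "driver domain", so there is nothing reliable to test here.
         */
        (void)d;
        return false;
    }

    /* Refuse 3-level registration for ordinary guests. */
    static int evtchn_check_3level(const struct domain *d)
    {
        if ( !d->is_privileged && !is_driver_domain(d) )
            return -EPERM;
        return 0;
    }
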

So you intended to fail the request for other guests? That's fine
with me in principle, but how do you tell a driver domain from an
"ordinary" one?

Jan

