
Re: [Xen-devel] Question about partitioning shared cache in Xen



2015-01-14 11:29 GMT-05:00 Jan Beulich <JBeulich@xxxxxxxx>:
>>>> On 14.01.15 at 16:27, <xumengpanda@xxxxxxxxx> wrote:
>> 2015-01-14 10:02 GMT-05:00 Jan Beulich <JBeulich@xxxxxxxx>:
>>>>>> On 14.01.15 at 15:45, <xumengpanda@xxxxxxxxx> wrote:
>>>> Yes. I try to use the bits [A16, A12] to isolate different colors in a
>>>> shared cache. A 2MB 16-way associate shared cache uses [A16, A6] to
>>>> index the cache set. Because page size is 4KB, we have page frame
>>>> number's bits [A16, A12] overlapped with the bits used to index a
>>>> shared cache's cache set. So we can control those [A16, A12] bits to
>>>> control where the page should be placed. (The wiki pages about page
>>>> coloring is here: http://en.wikipedia.org/wiki/Cache_coloring)
>>>
>>> But the majority of allocations done for guests would be as 2M or
>>> 1G pages,
>>
>> First, I want to confirm my understanding is not incorrect: When Xen
>> allocate memory pages to guests, it current allocate a bunch of memory
>> pages at one time to guests. That's why you said the majority
>> allocation would be 2MB or 1GB. But the size of one memory page used
>> by guests is still 4KB. Am I correct?
>
> Yes.

So when Xen allocates memory to a PV guest with 256MB of memory and a 4KB
page size (i.e., 2^16 memory pages), Xen will allocate 2^16 contiguous
memory pages to this guest, since the maximum number of contiguous pages
Xen allocates to a PV guest is 1024*1024.
Although these 2^16 memory pages are contiguous, Xen still needs to fill
this guest's p2m table in a page-by-page fashion, which means each
element in the guest's p2m table is the page frame number of one 4KB
page. Right?

>
>> But can we allocate one memory page to guests until the guests have
>> enough pages?
>
> We can, but that's inefficient for TLB usage and page table lookup.

IMHO, that's true for any case where we use a smaller page size. In my
understanding, Xen manages guests' memory, say the p2m table or m2p
table, at the granularity of a 4KB page. In other words, the page size
in Xen is still 4KB. (Please correct me if I'm wrong.)
So if the number of pages a guest requests does not change (which means
the page size stays 4KB), the TLB usage should be the same.
If the page size in Xen were larger than 4KB, the TLB usage would
certainly increase if we forced Xen to use a 4KB page size.

OK. Suppose TLB usage and page table lookup do become less efficient
because of the page coloring mechanism. I totally agree that
non-contiguous memory may hurt the performance of a guest when the
guest runs alone. However, shared-cache partitioning can make the
performance of a guest more stable and less susceptible to influence
from other guests. Briefly speaking, I'm trying to make the running
time of the workload in a guest more deterministic and robust against
other guests' interference.

For applications, like the control program in an automobile, that
must produce results within a deadline, a deterministic execution time
is more important than an execution time that is smaller in most cases
but may be very large in the worst case.

>
>> I find in arch_setup_meminit() function in tools/libxc/xc_dom_x86.c
>> allocate memory pages depending on if the dom->superpages is true.
>> Can we add a if-else to allocate one page at each time to the guest
>> instead of allocate many pages in one time?
>
> That's for PV guests, which (by default) can't use 2M (not to speak of
> 1G) pages anyway.

Right now, I'm only looking at PV guests and trying to get some
measurements of the cache partitioning mechanism on PV guests. I want
to first show the benefits and costs of the page coloring mechanism in
Xen, and then maybe explore the other types of guests.

However, even for the PV guests, I'm struggling with the error I
mentioned above. :-(

Thank you very much for your time and help!
Hope you could give me some advice where I should investigate to fix the issue.

Best,

Meng

-- 


-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

