
Re: [Xen-devel] [RFC PATCH 68/84] page_alloc: actually do the mapping and unmapping on xenheap.



Hi,

On 9/26/19 2:03 PM, hongyax@xxxxxxxxxx wrote:
On 26/09/2019 13:24, Julien Grall wrote:
Hi,

On 9/26/19 12:18 PM, hongyax@xxxxxxxxxx wrote:
On 26/09/2019 11:39, Julien Grall wrote:
Hi,

NIT Title: Please remove full stop.

On 9/26/19 10:46 AM, hongyax@xxxxxxxxxx wrote:
From: Hongyan Xia <hongyax@xxxxxxxxxx>

Please provide a description of what/why you are doing this in the commit message.

Also, IIRC, x86 always has !CONFIG_SEPARATE_XENHEAP. So can you explain why the path with a separate xenheap is also modified?


Signed-off-by: Hongyan Xia <hongyax@xxxxxxxxxx>
---
  xen/common/page_alloc.c | 18 ++++++++++++++++--
  1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 7cb1bd368b..4ec6299ba8 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2143,6 +2143,7 @@ void init_xenheap_pages(paddr_t ps, paddr_t pe)
  void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
  {
      struct page_info *pg;
+    void *ret;
      ASSERT(!in_irq());
@@ -2151,7 +2152,10 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
      if ( unlikely(pg == NULL) )
          return NULL;
-    memguard_unguard_range(page_to_virt(pg), 1 << (order + PAGE_SHIFT));
+    ret = page_to_virt(pg);
+    memguard_unguard_range(ret, 1 << (order + PAGE_SHIFT));
+    map_pages_to_xen((unsigned long)ret, page_to_mfn(pg),
+                     1UL << order, PAGE_HYPERVISOR);
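
[The complementary change on the free path is not quoted above. Presumably free_xenheap_pages() gains the inverse operation; a minimal sketch, assuming the existing destroy_xen_mappings() helper is used to tear the mapping down:

void free_xenheap_pages(void *v, unsigned int order)
{
    ASSERT(!in_irq());

    if ( v == NULL )
        return;

    /* Sketch: undo the map_pages_to_xen() done at allocation time. */
    destroy_xen_mappings((unsigned long)v,
                         (unsigned long)v + (PAGE_SIZE << order));

    memguard_guard_range(v, 1 << (order + PAGE_SHIFT));

    free_heap_pages(virt_to_page(v), order, false);
}
]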

As mentioned earlier, on Arm the xenheap will always be mapped. So unless you plan to tackle the Arm side as well, we should make sure that the behavior is not changed for Arm.

I can add an #ifdef for x86. Although I think that if the Arm code is correct, this should still not break things. It only breaks if a xenheap access is made before allocation or after free, which I think is a bug anyway.

Correctness is a matter of perspective ;). The xenheap is already mapped on Arm, and therefore trying to map it again is considered an error. I think this is valid behavior, because if you try to map twice, it likely means you will have to unmap later on.

Good point. Thanks. Will an ifdef do the job?

I would suggest providing helpers, so you can document in a single place why this is necessary and avoid adding #ifdefery everywhere.

Also, do we expect similar bits in other parts of the common code? If yes, I would suggest adding those helpers in a header. If not, page_alloc.c should be enough.

Regarding the config name, I would rather not use CONFIG_X86 but use a new arch-agnostic one. Maybe CONFIG_DIRECTMAP_NOT_ALWAYS_MAPPED? (probably too verbose).
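
[To illustrate the suggestion: a minimal sketch of what such helpers could look like. The helper names and the Kconfig symbol are illustrative, not existing ones; architectures whose xenheap is always mapped get no-op stubs:

/*
 * Sketch only. With CONFIG_DIRECTMAP_NOT_ALWAYS_MAPPED (x86, no direct
 * map) the helpers really map/unmap; on Arm the xenheap is permanently
 * mapped, so they collapse to no-ops.
 */
#ifdef CONFIG_DIRECTMAP_NOT_ALWAYS_MAPPED
static int xenheap_map_pages(void *v, struct page_info *pg,
                             unsigned int order)
{
    return map_pages_to_xen((unsigned long)v, page_to_mfn(pg),
                            1UL << order, PAGE_HYPERVISOR);
}

static void xenheap_unmap_pages(void *v, unsigned int order)
{
    destroy_xen_mappings((unsigned long)v,
                         (unsigned long)v + (PAGE_SIZE << order));
}
#else /* Xenheap always mapped (e.g. Arm): nothing to do. */
static inline int xenheap_map_pages(void *v, struct page_info *pg,
                                    unsigned int order)
{
    return 0;
}

static inline void xenheap_unmap_pages(void *v, unsigned int order) {}
#endif

alloc_xenheap_pages()/free_xenheap_pages() would then call these unconditionally, keeping the rationale and the #ifdef in one place.]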



Furthermore, the xenheap is using superpage (2MB, 1GB) mappings at the moment. We completely forbid shattering superpages because they are not trivial to deal with.

In short, if you wanted to unmap part of it, then you would need to shatter the superpage. Shattering a superpage requires following a specific sequence (called break-before-make, sketched below) that goes through an invalid mapping. We need to be careful, as another processor may access the superpage at the same time.

It may be possible to use only 4KB mappings for the xenheap, but that needs to be investigated first.
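
[For concreteness, the break-before-make sequence for shattering a 2MB block mapping boils down to three steps. A sketch with generic stand-in primitives; pte_t, write_pte() and flush_tlb_range() stand in for the real per-architecture types and operations:

static void shatter_block_sketch(pte_t *entry, pte_t table_entry,
                                 unsigned long va)
{
    pte_t invalid = { 0 };

    /* 1. Break: replace the 2MB block mapping with an invalid entry. */
    write_pte(entry, invalid);

    /* 2. Flush the TLBs so no processor still caches the old
     *    translation. Accesses to the range fault from here on. */
    flush_tlb_range(va, va + MB(2));

    /* 3. Make: install the entry pointing to the new 4KB page table. */
    write_pte(entry, table_entry);
}

The window between steps 1 and 3, where the range is unmapped, is exactly why concurrent accesses from other processors are the problem.]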

The series is intended for x86, which then starts with no xenheap mappings. If one call to map_pages_to_xen() maps the first half of a superpage and a second one maps the remaining half, will the second call merge them into an actual superpage mapping? Also, do we ever do a xenheap allocation and then only free part of it? If both answers are (hopefully) no, then shattering won't happen.

For the avoidance of doubt, I was describing how Arm works. For any x86-specific question, Jan/Andrew are the best points of contact (I saw Jan already answered).

The main point is that any common code should be able to work for any existing architecture (ATM x86 and Arm).

Cheers,

--
Julien Grall
