
Re: [PATCH v3 0/8] Follow-up static shared memory PART I


  • To: Michal Orzel <michal.orzel@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Penny Zheng <penny.zheng@xxxxxxx>
  • Date: Mon, 11 Sep 2023 12:14:52 +0800
  • Cc: <wei.chen@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, "Julien Grall" <julien@xxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>
  • Delivery-date: Mon, 11 Sep 2023 04:18:56 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Hi Michal

Sorry for the delayed response; I've been caught up in an internal release lately. :\

On 2023/8/22 16:57, Michal Orzel wrote:
Hi Penny,

On 22/08/2023 07:32, Penny Zheng wrote:


Hi Michal,

On 2023/8/21 18:49, Michal Orzel wrote:
Hi Penny,

On 21/08/2023 06:00, Penny Zheng wrote:


There are some unresolved issues in the current 4.17 static shared memory
feature[1], including:
- In order to avoid growing 'membank' even further, having the shared memory
info in separate structures is preferred.
- Missing implementation of making the host address optional in the
"xen,shared-mem" property
- Removing static shared memory from extended regions
- Missing reference release on foreign superpages
- Missing "xen,offset" feature, which is introduced in the Linux documentation[2]

All of the above items have been divided into two parts, and this
patch series is PART I.

[1] https://lore.kernel.org/all/20220908135513.1800511-1-Penny.Zheng@xxxxxxx/
[2] 
https://www.kernel.org/doc/Documentation/devicetree/bindings/reserved-memory/xen%2Cshared-memory.txt

It looks like there is a problem with the changes introduced in this series.
The gitlab static shared memory tests failed:
https://gitlab.com/xen-project/patchew/xen/-/pipelines/973985190
No Xen logs, meaning the failure occurred before serial console initialization.

Now, I would like to share some observations after playing around with the
current static shared memory code today.
1) A static shared memory region is advertised to a domain by creating a child
node under /reserved-memory.
/reserved-memory is nothing but a way to carve out a region from the normal
memory specified in the /memory node.
For me, such regions should be described in the domain's /memory node as well.
This is not the case at the moment for static shm, unlike the other sub-nodes
of /reserved-memory (present in the host dtb) for which Xen creates separate
/memory nodes.


Hmm, correct me if I'm wrong:
if we describe it in the domain's /memory node too, it will be treated as
normal memory, and then any application could use it. The reason why we put
static shm under /reserved-memory is that we only want a special driver,
like the static shm Linux driver, to be able to access it.

If you track down make_memory_node(), only memory ranges that are
reserved for a device (or firmware) are described a second time as normal
memory in Dom0. Memory like static shm is passed over.

Reserved memory is a region of RAM (not MMIO) carved out for a special purpose,
which can be used by a driver, e.g. for a shared DMA pool.
Therefore, such a region shall be described both under /memory (used to present
the total RAM, and reserved memory is in RAM) and under /reserved-memory.
The OS parses /memory and then parses /reserved-memory to exclude the regions
from normal usage (there is also the no-map property to tell the OS not to
create a virtual mapping). So you do not need to worry about the OS making use
of something we marked as reserved.
This is exactly what Xen does if there are regions described as reserved in the
host dtb:
1. Xen parses the host dtb and adds reserved regions to bootinfo.reserved_mem
so that they will not be used, e.g. by the allocator
2. While copying nodes from the host dtb, Xen copies reserved memory nodes to
the dom0 dtb and only maps the regions in p2m without permitting iomem access
3. Xen creates another /memory node to contain the reserved memory ranges

I guess static shm is no exception to this flow. It is part of RAM suited for
memory sharing.
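
As a rough illustration of step 3 above, here is a minimal libfdt sketch of
how an extra /memory node covering one reserved range could be emitted with
the sequential-write API. The function name, the standalone includes and the
assumption of #address-cells = #size-cells = 2 are illustrative only, not the
actual Xen implementation (which works on the in-progress kinfo->fdt and loops
over all reserved banks):

#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <libfdt.h>

/*
 * Illustrative sketch only: emit one additional /memory node describing a
 * reserved range, so the range shows up under /memory as well as under
 * /reserved-memory.  Assumes a sequential-write build is in progress
 * (fdt_create() done and the enclosing root node already opened) and that
 * the parent node uses #address-cells = #size-cells = 2.
 */
static int make_reserved_memory_node(void *fdt, uint64_t start, uint64_t size)
{
    fdt64_t reg[2] = { cpu_to_fdt64(start), cpu_to_fdt64(size) };
    char name[64];
    int res;

    snprintf(name, sizeof(name), "memory@%" PRIx64, start);

    res = fdt_begin_node(fdt, name);
    if ( res )
        return res;

    res = fdt_property_string(fdt, "device_type", "memory");
    if ( res )
        return res;

    res = fdt_property(fdt, "reg", reg, sizeof(reg));
    if ( res )
        return res;

    return fdt_end_node(fdt);
}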


Understood! Thanks for the detailed explanation.
I've created a new commit "xen/arm: fix duplicate /reserved-memory node in Dom0" to fix this problem in v4[1]; please feel free to review.


2) Domain dtb parsing issue with two /reserved-memory nodes present.
In case there is a /reserved-memory node already present in the host dtb, Xen
would create yet another /reserved-memory node for the static shm (to be
observed in the case of dom0). This is a bug, as there can be only one
/reserved-memory node.
This leads to an error when dumping with dtc and to the shm node not being
visible to a domain (a guest OS relies on the presence of a single
/reserved-memory node). The issue is that in make_resv_memory_node() you are
not checking whether such a node already exists.
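
As a hedged, standalone illustration of the missing "does the node already
exist" check (assuming a finished read/write blob rather than the in-progress
sequential-write dtb Xen actually builds at that point, and a made-up helper
name):

#include <stdbool.h>
#include <libfdt.h>

/*
 * Illustrative only: report whether the blob already contains a
 * /reserved-memory node.  In the real Xen flow the equivalent check would
 * more likely be made against the host dtb while its nodes are being copied,
 * since the dom0 dtb is still being written sequentially.
 */
static bool resv_mem_node_exists(const void *fdt)
{
    return fdt_path_offset(fdt, "/reserved-memory") >= 0;
}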

Yes, you're right.
In Dom0, we can see two /reserved-memory nodes. I think that, if there is a
/reserved-memory node already present in the host dtb, we should keep track
of it in kinfo for make_resv_memory_node().
This way you will only take a reference to the region, but what about all the
properties, node names, etc. that you need to copy?
This is why Xen first copies the reserved memory nodes from the host dtb and
then adds the ranges to /memory.
In our shm case, we would need to insert the shm node into the existing
/reserved-memory node. This is a bit tricky, as you can no longer
make use of fdt_{begin,end}_node and instead have to use the helpers operating
on offsets.
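
A minimal, hedged sketch of what using the offset-based helpers could look
like on a finished (read/write) blob; the node name, the "xen,id" value and
the assumption of 2 address/size cells are illustrative, not the eventual
implementation:

#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <libfdt.h>

/*
 * Illustrative sketch only: add a single shared-memory child under an
 * already existing /reserved-memory node using the offset-based
 * (read/write) libfdt API instead of fdt_{begin,end}_node.
 */
static int insert_shm_subnode(void *fdt, uint64_t gbase, uint64_t size)
{
    fdt64_t reg[2] = { cpu_to_fdt64(gbase), cpu_to_fdt64(size) };
    char name[64];
    int parent, node, res;

    parent = fdt_path_offset(fdt, "/reserved-memory");
    if ( parent < 0 )
        return parent;              /* no /reserved-memory node to extend */

    snprintf(name, sizeof(name), "xen-shmem@%" PRIx64, gbase);

    node = fdt_add_subnode(fdt, parent, name);
    if ( node < 0 )
        return node;

    res = fdt_setprop_string(fdt, node, "compatible", "xen,shared-memory-v1");
    if ( res )
        return res;

    res = fdt_setprop(fdt, node, "reg", reg, sizeof(reg));
    if ( res )
        return res;

    /* The id value below is a placeholder. */
    return fdt_setprop_string(fdt, node, "xen,id", "shmem-id-placeholder");
}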


I've also created a new commit "xen/arm: create another /memory node for static shm" to fix this problem in v4[1]; please feel free to review.

[1] https://lore.kernel.org/xen-devel/20230911040442.2541398-1-Penny.Zheng@xxxxxxx/

~Michal




 

