Re: [Xen-devel] [RFC KERNEL PATCH 0/2] Add Dom0 NVDIMM support for Xen



On 10/14/16 04:16 -0600, Jan Beulich wrote:
On 13.10.16 at 17:46, <haozhong.zhang@xxxxxxxxx> wrote:
On 10/13/16 03:08 -0600, Jan Beulich wrote:
On 13.10.16 at 10:53, <haozhong.zhang@xxxxxxxxx> wrote:
On 10/13/16 02:34 -0600, Jan Beulich wrote:
On 12.10.16 at 18:19, <dan.j.williams@xxxxxxxxx> wrote:
On Wed, Oct 12, 2016 at 9:01 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
On 12.10.16 at 17:42, <dan.j.williams@xxxxxxxxx> wrote:
On Wed, Oct 12, 2016 at 8:39 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
On 12.10.16 at 16:58, <haozhong.zhang@xxxxxxxxx> wrote:
On 10/12/16 05:32 -0600, Jan Beulich wrote:
On 12.10.16 at 12:33, <haozhong.zhang@xxxxxxxxx> wrote:
The layout is shown in the following diagram.

+---------------+-----------+-------+----------+--------------+
| whatever used | Partition | Super | Reserved | /dev/pmem0p1 |
|  by kernel    |   Table   | Block | for Xen  |              |
+---------------+-----------+-------+----------+--------------+
                \_____________________ _______________________/
                                  V
                             /dev/pmem0
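
To make this concrete, here is a rough sketch (in C) of what the super
block could record. The struct, field names, fixed location, and magic
value are only illustrative, not a defined on-media format:

#include <stdint.h>

/* Illustrative only: a super block recording where the Xen-reserved
 * area and the partition payload live on the device. */
#define PMEM_SB_MAGIC 0x4e5644494d4d5342ULL  /* arbitrary ("NVDIMMSB") */

struct pmem_super_block {
        uint64_t magic;        /* identifies a valid super block        */
        uint64_t xen_rsv_off;  /* byte offset of the "Reserved for Xen" */
        uint64_t xen_rsv_len;  /* area, and its length in bytes         */
        uint64_t data_off;     /* where the /dev/pmem0p1 payload begins */
        uint64_t checksum;     /* integrity check over this super block */
};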

I have to admit that I dislike this, for not being OS-agnostic.
Neither should there be any Xen-specific region, nor should the
"whatever used by kernel" one be restricted to just Linux. What
I could see is an OS-reserved area ahead of the partition table,
the exact usage of which depends on which OS is currently
running (and in the Xen case this might be both Xen _and_ the
Dom0 kernel, arbitrated by a TBD protocol). After all, when
running under Xen, the Dom0 kernel may not need as much
control data as it does on bare hardware, since it controls
less (if any) of the actual memory ranges when Xen is
present.


Isn't this OS-reserved area still not OS-agnostic, as it requires the
OS to know where the reserved area is?  Or do you mean it would be
OS-agnostic if defined by a protocol that is accepted by all OSes?

The latter - we clearly won't get away without some agreement on
where to retrieve the position and size of this area. I was simply
assuming that such a protocol already exists.
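
As a strawman, the agreement could be as simple as a magic-tagged
descriptor at a fixed, well-known offset, which any OS (or hypervisor)
probes for. A minimal user-space sketch, reusing the made-up super
block format from the diagram above (the 4 KiB offset is equally
made up):

#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

/* Made-up format and location, as in the earlier sketch. */
#define PMEM_SB_MAGIC  0x4e5644494d4d5342ULL
#define PMEM_SB_OFFSET 4096

struct pmem_super_block {
        uint64_t magic, xen_rsv_off, xen_rsv_len, data_off, checksum;
};

/* Returns 0 and fills *off/*len if the device carries a valid
 * descriptor of the OS-reserved area, -1 otherwise. */
static int find_os_reserved(const char *dev, uint64_t *off, uint64_t *len)
{
        struct pmem_super_block sb;
        int fd = open(dev, O_RDONLY);

        if (fd < 0)
                return -1;
        if (pread(fd, &sb, sizeof(sb), PMEM_SB_OFFSET) != sizeof(sb) ||
            sb.magic != PMEM_SB_MAGIC) {
                close(fd);
                return -1;      /* no (valid) reserved area */
        }
        close(fd);
        *off = sb.xen_rsv_off;
        *len = sb.xen_rsv_len;
        return 0;
}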


No, we should not mix the struct page reservation that the Dom0 kernel
may actively use with the Xen reservation that the Dom0 kernel does
not consume.  Explain again what is wrong with the partition approach?

Not sure what was unclear in my previous reply. I don't think there
should be a priori knowledge of whether Xen is (going to be) used on
a system, and even if it gets used just occasionally, it would
(apart from the abstract considerations already given) be a waste
of resources to set something aside that could be used for other
purposes while Xen is not running. Static partitioning should only be
needed for persistent data.

The reservation needs to be persistent / static even if the data is
volatile, as is the case with struct page, because we can't have the
size of the device change depending on use.  So, from the aspect of
wasting space while Xen is not in use, both partitions and the
intrinsic reservation approach suffer the same problem. Setting that
aside, I don't want to mix two different use cases into the same
reservation.

Then you didn't understand what I've said: I certainly didn't mean
the reservation to vary from a device perspective. However, when
Xen is in use I don't see why part of that static reservation couldn't
be used by Xen, and another part by the Dom0 kernel. The kernel
obviously would need to ask the hypervisor how much of the space
is left, and where that area starts.
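
For illustration only, such a query could look like the sketch below;
the op number and structure are invented, and no such interface exists
in Xen today:

#include <stdint.h>

/* Invented op and structure -- not part of the existing Xen ABI. */
#define XENMEM_query_pmem_reservation 99

struct xen_pmem_reservation {
        /* IN: start SPA of the NVDIMM region being asked about. */
        uint64_t dev_spa;
        /* OUT: the part of the static reservation left for Dom0. */
        uint64_t free_off;
        uint64_t free_len;
};

/*
 * Dom0 would then do something like
 *     struct xen_pmem_reservation r = { .dev_spa = spa };
 *     HYPERVISOR_memory_op(XENMEM_query_pmem_reservation, &r);
 * and manage only [free_off, free_off + free_len), leaving the
 * rest of the static reservation to Xen.
 */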


I think Dan means that there should be a clear separation between
reservations for different usages (kernel/xen/...). The libnvdimm
driver is for the Linux kernel and only needs to maintain the
reservation for kernel functionality. Other components, including
xen/dm/..., that want a reservation for their own purposes should
maintain their own reservations outside the libnvdimm driver, rather
than requiring specific handling to be added to it.

IIUC, one existing example is device-mapper (dm), which needs to
reserve an on-device area for its own metadata. It chooses to store
that metadata on the block device (/dev/pmemN) provided by the
libnvdimm driver.

I think we can do something similar for Xen, i.e. layer another
pseudo device on /dev/pmem and make the reservation there, as in
option 2 of my previous reply.
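
For illustration, stamping such a reservation from user space onto the
block device the libnvdimm driver already exposes could look like the
sketch below, reusing the made-up super block format from earlier in
the thread:

#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>

/* Same made-up format and location as in the earlier sketches. */
#define PMEM_SB_MAGIC  0x4e5644494d4d5342ULL
#define PMEM_SB_OFFSET 4096

struct pmem_super_block {
        uint64_t magic, xen_rsv_off, xen_rsv_len, data_off, checksum;
};

/* Reserve [off, off + len) on the device for Xen by writing the
 * descriptor through /dev/pmemN -- the same way dm stores its
 * metadata, without any libnvdimm driver involvement. */
static int xen_stamp_reservation(const char *dev, uint64_t off, uint64_t len)
{
        struct pmem_super_block sb = {
                .magic       = PMEM_SB_MAGIC,
                .xen_rsv_off = off,
                .xen_rsv_len = len,
                .data_off    = off + len, /* payload follows reservation */
        };
        int fd = open(dev, O_WRONLY);
        ssize_t n;

        if (fd < 0)
                return -1;
        n = pwrite(fd, &sb, sizeof(sb), PMEM_SB_OFFSET);
        close(fd);
        return n == (ssize_t)sizeof(sb) ? 0 : -1;
}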

Well, my opinion certainly doesn't count much here, but I continue to
consider this a bad idea. For entities like drivers it may well be
appropriate, but I think there ought to be an independent concept
of "OS reserved", and in the Xen case this could then be shared
between hypervisor and Dom0 kernel.

No such independent concept seems to exist right now. It may be hard
to define one, because it's hard to know the common requirements
(e.g. size/alignment/...) of ALL OSes. Letting each component
maintain its own reservation in its own way seems more flexible.

Or if we were to consider Dom0
"just a guest", things should even be the other way around: Xen gets
all of the OS reserved space, and Dom0 needs something custom.


Sure, it's possible to implement the driver in a way that, if it
finds it is running on Xen, it leaves the OS-reserved area to Xen and
takes its own reservation elsewhere. But are there practical
differences from the converse arrangement, where Xen takes its
reservation elsewhere, that force us to do it that way? If not, and
it's possible to leave the existing libnvdimm driver untouched, why
don't we just use the existing libnvdimm driver and let the xen
driver make its reservation on top of what the libnvdimm driver
provides?

It continues to feel like you're trying to make the second step before
the first: you talk about implementation, whereas I talk about the
concept that should underlie your implementation. One could view it
this way: how said driver works with Xen in the picture should
have been decided before it got implemented, and then the question
of whether it would be okay to leave the existing implementation
alone would not have appeared in the first place.

With that, I can't reasonably answer your questions.


Ideally, Xen should step in first to do whatever is needed for
resource management (e.g. taking all OS-reserved space), and then let
other kernel components work on the remaining resources (e.g. the
libnvdimm driver using the other reserved space). However, the
libnvdimm driver was merged into the kernel long before I started to
design and implement Xen vNVDIMM. Implementing the ideal design above
would require modifying the existing driver, which its maintainer
objects to. Therefore, I looked to an alternative design and
implementation.

Thanks,
Haozhong
