
Re: [Xen-devel] [PATCH 4/4] hvmloader: add support to load extra ACPI tables from qemu



On 01/26/16 08:57, Jan Beulich wrote:
> >>> On 26.01.16 at 16:30, <haozhong.zhang@xxxxxxxxx> wrote:
> > On 01/26/16 05:44, Jan Beulich wrote:
> >> Interesting. This isn't the usage model I have been thinking about
> >> so far. Having just gone back to the original 0/4 mail, I'm afraid
> >> we're really left guessing, and you guessed differently than I did.
> >> My understanding of the intention of PMEM so far was that it is a
> >> high-capacity alternative to normal RAM, slower than DRAM but much
> >> faster than e.g. swapping to disk. I.e. the persistent aspect of it
> >> wouldn't matter at all in this case (other than for PBLK,
> >> obviously).
> > 
> > Of course, pmem could be used in the way you describe because of its
> > 'ram' aspect. But I think the more meaningful usage comes from its
> > persistent aspect. For example, a journaling file system could store
> > its logs in pmem rather than in normal RAM, so that if a power
> > failure happens before those in-memory logs are completely written
> > to disk, there would still be a chance to restore them from pmem
> > after the next boot (rather than abandoning all of them).
> 
> Well, that leaves open how that file system would find its log
> after reboot, or how that log is protected from clobbering by
> another OS booted in between.
>

That would depend on the concrete design of the OS or application. This
is just an example to show one possible use of the persistent aspect.
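
To make the example a bit more concrete, here is a rough sketch of how
an application could keep such a log in pmem. It is not part of this
patch series; the device path /dev/pmem0 and the record layout are only
illustrative, and error handling is trimmed:

/* Hypothetical example: append a record to a journal kept in pmem so
 * it survives a power failure.  Assumes the kernel exposes the
 * namespace as /dev/pmem0. */
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

struct journal_rec {                  /* illustrative on-media record */
    uint64_t seq;
    uint64_t len;
    char     payload[240];
};

int log_append(const char *msg)
{
    int fd = open("/dev/pmem0", O_RDWR);
    if (fd < 0)
        return -1;

    struct journal_rec *rec = mmap(NULL, sizeof(*rec),
                                   PROT_READ | PROT_WRITE, MAP_SHARED,
                                   fd, 0);
    if (rec == MAP_FAILED) {
        close(fd);
        return -1;
    }

    size_t n = strlen(msg);
    if (n > sizeof(rec->payload))
        n = sizeof(rec->payload);

    rec->seq += 1;
    rec->len = n;
    memcpy(rec->payload, msg, n);

    /* Make sure the update reaches the persistent media before we
     * claim success; with a real DAX mapping one would use
     * CLWB/CLFLUSHOPT plus a fence instead of msync(). */
    msync(rec, sizeof(*rec), MS_SYNC);

    munmap(rec, sizeof(*rec));
    close(fd);
    return 0;
}

The point is only that data written before the flush is expected to
still be there on the next boot, unlike data kept in normal RAM.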

> >> However, thinking through your usage model I have problems
> >> seeing it work in a reasonable way even with virtualization left
> >> aside: To my knowledge there's no established protocol on how
> >> multiple parties (different versions of the same OS, or even
> >> completely different OSes) would arbitrate using such memory
> >> ranges. And even for a single OS it is, other than for disks (and
> >> hence PBLK), not immediately clear how it would communicate
> >> from one boot to another what information got stored where,
> >> or how it would react to some or all of this storage having
> >> disappeared (just like a disk which got removed, which - unless
> >> it held the boot partition - would normally have pretty little
> >> effect on the OS coming back up).
> > 
> > The label storage area is a persistent area on an NVDIMM that can
> > be used to store partition information. It is not included in pmem
> > (the part that is mapped into the system address space); instead,
> > it can only be accessed through the NVDIMM _DSM method [1].
> > However, what contents are stored there and how they are
> > interpreted are left to software. One way is to follow the NVDIMM
> > Namespace Specification [2] and store an array of labels, each
> > describing the start address (from the base 0 of pmem) and the size
> > of a partition, which is called a namespace. On Linux, each
> > namespace is exposed as a /dev/pmemXX device.
> 
> According to what I've just read in one of the documents Konrad
> pointed us to, there can be just one PMEM label per DIMM. Unless
> I misread of course...
>

My mistake; there can be only one pmem label per DIMM.
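
Regarding the labels themselves, for illustration only (this is not the
exact on-media layout defined by the Namespace Specification [2], just
the idea), a label roughly records where a namespace lives inside the
DIMM's pmem range, so software can find it again after a reboot:

#include <stdint.h>

/* Illustrative label entry -- field names and layout are made up for
 * this example, not taken from the spec. */
struct pmem_label {
    uint8_t  uuid[16];   /* identifies the namespace across reboots */
    uint64_t base;       /* start offset from base 0 of the pmem range */
    uint64_t size;       /* size of the namespace in bytes */
};

/* Given where the DIMM's pmem range starts in the system address
 * space, compute where a namespace begins. */
static inline uint64_t namespace_spa_start(uint64_t pmem_spa_base,
                                           const struct pmem_label *l)
{
    return pmem_spa_base + l->base;
}

The label storage area itself would still be read and written via the
_DSM method, not through the mapped pmem range.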

Haozhong

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel