
Re: [Xen-devel] [PATCH 4/4] hvmloader: add support to load extra ACPI tables from qemu

On Wed, 20 Jan 2016, Andrew Cooper wrote:
> On 20/01/16 10:36, Xiao Guangrong wrote:
> >
> > Hi,
> >
> > On 01/20/2016 06:15 PM, Haozhong Zhang wrote:
> >
> >> CCing QEMU vNVDIMM maintainer: Xiao Guangrong
> >>
> >>> Conceptually, an NVDIMM is just like a fast SSD which is linearly
> >>> mapped
> >>> into memory.  I am still on the dom0 side of this fence.
> >>>
> >>> The real question is whether it is possible to take an NVDIMM, split it
> >>> in half, give each half to two different guests (with appropriate NFIT
> >>> tables) and that be sufficient for the guests to just work.
> >>>
> >>
> >> Yes, one NVDIMM device can be split into multiple parts and assigned
> >> to different guests, and QEMU is responsible for maintaining a
> >> virtual NFIT table for each part.
> >>
> >>> Either way, it needs to be a toolstack policy decision as to how to
> >>> split the resource.
> >
> > Currently, we use the NVDIMM as a block device and create a DAX-based
> > filesystem on it in Linux, so that file accesses reach the NVDIMM
> > device directly.
> >
> > In KVM, if the NVDIMM device needs to be shared by different VMs, we
> > can create multiple files on the DAX-based filesystem and assign a
> > file to each VM. In the future, we can enable namespaces
> > (partition-like) for PMEM and assign a namespace to each VM (the
> > current Linux driver uses the whole PMEM as a single namespace).
> >
> > I think it is not easy to make the Xen hypervisor recognize NVDIMM
> > devices and manage NVDIMM resources.
> >
> > Thanks!
> >
> The more I see about this, the more sure I am that we want to keep it as
> a block device managed by dom0.
> In the case of the DAX-based filesystem, I presume files are not
> necessarily contiguous.  I also presume that this is worked around by
> permuting the mapping of the virtual NVDIMM such that it appears as a
> contiguous block of addresses to the guest?
> Today in Xen, Qemu already has the ability to create mappings in the
> guest's address space, e.g. to map PCI device BARs.  I don't see a
> conceptual difference here, although the security/permission model
> certainly is more complicated.
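For reference, the virtual NFIT entries mentioned above come down to one
System Physical Address Range Structure per slice handed to a guest. A
sketch of the layout as defined in the ACPI 6.0 specification (the field
comments are mine; how QEMU populates each field is an assumption, not
taken from this thread):

```c
/* Sketch: the per-slice NFIT entry (ACPI 6.0, System Physical Address
 * Range Structure).  The field layout follows the spec; the comments on
 * how the fields would be used for a vNVDIMM slice are illustrative. */
#include <stdint.h>

struct nfit_spa_range {
    uint16_t type;                /* 0 = SPA Range Structure */
    uint16_t length;              /* 56 bytes for this structure */
    uint16_t index;               /* unique index for this range */
    uint16_t flags;
    uint32_t reserved;
    uint32_t proximity_domain;    /* NUMA domain of the range */
    uint8_t  range_type_guid[16]; /* e.g. the Persistent Memory GUID */
    uint64_t base;                /* guest-physical base of the slice */
    uint64_t size;                /* length of the slice in bytes */
    uint64_t mem_attr;            /* EFI memory mapping attributes */
} __attribute__((packed));
```

Each slice of the split NVDIMM would get its own structure with a
distinct index, base, and size, which is what lets the guest see its
portion as an ordinary contiguous persistent-memory range.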

I imagine that mmap'ing these /dev/pmemXX devices requires root
privileges, does it not?

I wouldn't encourage the introduction of anything else that requires
root privileges in QEMU. With QEMU running as non-root by default in
4.7, the feature will not be available unless users explicitly ask to
run QEMU as root (which they really shouldn't).
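To make the concern concrete: the mapping QEMU would need is an ordinary
mmap() of the device node, and the failure mode for a non-root QEMU is
simply EACCES at open() time. A minimal sketch (the /dev/pmemXX path and
its root-only default permissions are assumptions about the deployment):

```c
/* Sketch: map a pmem-style device node (or any file) read-write and
 * touch it.  If /dev/pmemXX is owned by root with restrictive modes,
 * an unprivileged QEMU fails at open() with EACCES. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int map_pmem(const char *path, size_t len)
{
    int fd = open(path, O_RDWR);
    if (fd < 0) {
        /* EACCES here is the non-root case under discussion */
        fprintf(stderr, "open %s: %s\n", path, strerror(errno));
        return -1;
    }
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    if (p == MAP_FAILED) {
        fprintf(stderr, "mmap %s: %s\n", path, strerror(errno));
        return -1;
    }
    memset(p, 0, len);   /* guest memory would be backed by this mapping */
    munmap(p, len);
    return 0;
}
```

Granting the QEMU user access via group membership or udev rules would
avoid running the whole process as root, but that is exactly the kind of
security/permission model question raised above.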

Xen-devel mailing list


