
Re: [Xen-devel] [PATCH 4/4] hvmloader: add support to load extra ACPI tables from qemu



On 01/20/16 10:13, Konrad Rzeszutek Wilk wrote:
> On Wed, Jan 20, 2016 at 10:53:10PM +0800, Haozhong Zhang wrote:
> > On 01/20/16 14:45, Andrew Cooper wrote:
> > > On 20/01/16 14:29, Stefano Stabellini wrote:
> > > > On Wed, 20 Jan 2016, Andrew Cooper wrote:
> > > >> On 20/01/16 10:36, Xiao Guangrong wrote:
> > > >>> Hi,
> > > >>>
> > > >>> On 01/20/2016 06:15 PM, Haozhong Zhang wrote:
> > > >>>
> > > >>>> CCing QEMU vNVDIMM maintainer: Xiao Guangrong
> > > >>>>
> > > >>>>> Conceptually, an NVDIMM is just like a fast SSD which is linearly
> > > >>>>> mapped into memory.  I am still on the dom0 side of this fence.
> > > >>>>>
> > > >>>>> The real question is whether it is possible to take an NVDIMM,
> > > >>>>> split it in half, give each half to two different guests (with
> > > >>>>> appropriate NFIT tables) and that be sufficient for the guests
> > > >>>>> to just work.
> > > >>>>>
> > > >>>> Yes, one NVDIMM device can be split into multiple parts and assigned
> > > >>>> to different guests, and QEMU is responsible for maintaining virtual
> > > >>>> NFIT tables for each part.
> > > >>>>
> > > >>>>> Either way, it needs to be a toolstack policy decision as to how to
> > > >>>>> split the resource.
> > > >>> Currently, we are using NVDIMM as a block device, and a DAX-based
> > > >>> filesystem is created on it in Linux so that file-related accesses
> > > >>> directly reach the NVDIMM device.
> > > >>>
> > > >>> In KVM, if the NVDIMM device needs to be shared by different VMs, we
> > > >>> can create multiple files on the DAX-based filesystem and assign a
> > > >>> file to each VM. In the future, we can enable namespaces
> > > >>> (partition-like) for PMEM memory and assign a namespace to each VM
> > > >>> (the current Linux driver uses the whole PMEM as a single namespace).
> > > >>>
> > > >>> I think it is not easy to make the Xen hypervisor recognize NVDIMM
> > > >>> devices and manage NVDIMM resources.
> > > >>>
> > > >>> Thanks!
> > > >>>
> > > >> The more I see about this, the more sure I am that we want to keep
> > > >> it as a block device managed by dom0.
> > > >>
> > > >> In the case of the DAX-based filesystem, I presume files are not
> > > >> necessarily contiguous.  I also presume that this is worked around by
> > > >> permuting the mapping of the virtual NVDIMM such that it appears as
> > > >> a contiguous block of addresses to the guest?
> > > >>
> > > >> Today in Xen, Qemu already has the ability to create mappings in the
> > > >> guest's address space, e.g. to map PCI device BARs.  I don't see a
> > > >> conceptual difference here, although the security/permission model
> > > >> certainly is more complicated.
> > > > I imagine that mmap'ing these /dev/pmemXX devices requires root
> > > > privileges, does it not?
> > > 
> > > I presume it does, although mmap()ing a file on a DAX filesystem will
> > > work in the standard POSIX way.
> > > 
> > > Neither of these are sufficient however.  That gets Qemu a mapping of
> > > the NVDIMM, not the guest.  Something, one way or another, has to turn
> > > this into appropriate add-to-phymap hypercalls.
> > >
> > 
> > Yes, those hypercalls are what I'm going to add.
> 
> Why?
> 
> What you need (in a rough, hand-wavy way) is to:
>  - mount /dev/pmem0
>  - mmap the file on the /dev/pmem0 FS
>  - walk the VMA for the file - extract the MFNs (machine frame numbers)

Can this step be done by QEMU? Or does the Linux kernel provide some
way for userspace to do the translation?

Haozhong

>  - feed those frame numbers to the xc_memory_mapping hypercall. The
>    guest pfns would be contiguous.
>    Example: say the E820_NVDIMM starts at 8GB->16GB, so an 8GB file on
>    the /dev/pmem0 FS - the guest pfns are 0x200000 upward.
> 
>    However the MFNs may be discontiguous, as the NVDIMM could be
>    1TB - and the 8GB file is scattered all over it.
> 
> I believe that is all you would need to do?
> > 
> > Haozhong
> > 
> > > >
> > > > I wouldn't encourage the introduction of anything else that requires
> > > > root privileges in QEMU. With QEMU running as non-root by default in
> > > > 4.7, the feature will not be available unless users explicitly ask to
> > > > run QEMU as root (which they shouldn't really).
> > > 
> > > This isn't how design works.
> > > 
> > > First, design a feature in an architecturally correct way, and then
> > > design a security policy to fit.  (Note: both before implementation
> > > happens.)
> > > 
> > > We should not stunt design based on an existing implementation.  In
> > > particular, if design shows that being a root only feature is the only
> > > sane way of doing this, it should be a root only feature.  (I hope this
> > > is not the case, but it shouldn't cloud the judgement of a design).
> > > 
> > > ~Andrew
> > > 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
