
Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support for Xen



>>> On 01.02.16 at 06:44, <haozhong.zhang@xxxxxxxxx> wrote:
>  This design treats host NVDIMM devices as ordinary MMIO devices:

Wrt the cacheability note earlier on, I assume you're aware that with
the XSA-154 changes we disallow any cacheable mappings of MMIO
by default.

>  (1) The Dom0 Linux NVDIMM driver is responsible for detecting (through
>      NFIT) and driving host NVDIMM devices (implementing a block device
>      interface). Namespaces and file systems on host NVDIMM devices
>      are handled by Dom0 Linux as well.
> 
>  (2) QEMU mmap(2)s the pmem NVDIMM device (/dev/pmem0) into its
>      virtual address space (buf).
> 
>  (3) QEMU gets the host physical address of buf, i.e. the host system
>      physical address occupied by /dev/pmem0, and calls the Xen
>      hypercall XEN_DOMCTL_memory_mapping to map it into a DomU.
> 
>  (ACPI part is described in Section 3.3 later)
> 
>  Steps (1) and (2) above are already done in current QEMU; only (3)
>  needs to be implemented in QEMU (sketched below). No change is needed
>  in Xen for address mapping in this design.
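
For illustration, a rough (untested) sketch of what (2)+(3) amount to on
the QEMU side, using the libxc wrapper xc_domain_memory_mapping() for
XEN_DOMCTL_memory_mapping; virt_to_spa() is a hypothetical helper (e.g.
the pagemap-based lookup sketched further down) and gfn_base is wherever
the toolstack places the vNVDIMM in guest physical address space:

#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>
#include <xenctrl.h>

/* Hypothetical VA -> host SPA lookup, e.g. via /proc/self/pagemap. */
extern uint64_t virt_to_spa(const void *va);

/* Steps (2)+(3): map /dev/pmem0 and hand its SPA range to Xen.  This
 * assumes the SPA range backing the pmem device is contiguous. */
static int map_pmem_to_domu(xc_interface *xch, uint32_t domid,
                            unsigned long gfn_base, const char *path,
                            size_t size)
{
    int fd = open(path, O_RDWR);
    void *buf;

    if (fd < 0)
        return -1;

    buf = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) {
        close(fd);
        return -1;
    }

    return xc_domain_memory_mapping(xch, domid, gfn_base,
                                    virt_to_spa(buf) >> XC_PAGE_SHIFT,
                                    size >> XC_PAGE_SHIFT,
                                    1 /* DPCI_ADD_MAPPING */);
}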
> 
>  Open: It seems no system call/ioctl is provided by the Linux kernel
>        to get the physical address corresponding to a virtual address.
>        /proc/<qemu_pid>/pagemap provides information on the mapping
>        from VA to PA. Is it an acceptable solution to let QEMU parse
>        this file to get the physical address?
> 
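For reference, a minimal (untested) sketch of such a pagemap-based
lookup, here for the calling process itself (QEMU would use its own
/proc/self/pagemap); note that on recent kernels the PFN field is
zeroed for callers without CAP_SYS_ADMIN, and the page must be present:

#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* /proc/<pid>/pagemap: one 64-bit entry per virtual page;
 * bit 63 = page present, bits 0-54 = PFN.  Returns 0 on failure. */
static uint64_t virt_to_phys(const void *va)
{
    long psize = sysconf(_SC_PAGESIZE);
    uint64_t entry = 0, pa = 0;
    int fd = open("/proc/self/pagemap", O_RDONLY);

    if (fd < 0)
        return 0;

    if (pread(fd, &entry, sizeof(entry),
              ((uintptr_t)va / psize) * sizeof(entry)) == sizeof(entry) &&
        (entry & (1ULL << 63)))                      /* present? */
        pa = (entry & ((1ULL << 55) - 1)) * psize    /* PFN */
             + (uintptr_t)va % psize;                /* offset in page */

    close(fd);
    return pa;
}
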
>  Open: For a large pmem, mmap(2) may well not map all of the SPA
>        occupied by the pmem up front, i.e. QEMU may not be able to
>        get all SPAs of the pmem from buf (in virtual address space)
>        when calling XEN_DOMCTL_memory_mapping.
>        Can the mmap flag MAP_LOCKED or mlock(2) be used to force the
>        entire pmem to be mapped?
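
As to the second question, prefaulting can be requested either at
mmap(2) time or afterwards; a brief (untested) sketch below. Whether
either variant guarantees that the whole range is (and stays) mapped
for a DAX-backed pmem device is exactly the open point, though:

#define _GNU_SOURCE
#include <stddef.h>
#include <sys/mman.h>

/* Try to have the whole pmem range faulted in up front, either via
 * MAP_POPULATE (or MAP_LOCKED) at mmap time, or via mlock(2) afterwards. */
static void *map_pmem_populated(int fd, size_t size)
{
    void *buf = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_POPULATE /* or MAP_LOCKED */, fd, 0);

    if (buf != MAP_FAILED && mlock(buf, size) != 0) {
        munmap(buf, size);   /* could not pin the whole range */
        return MAP_FAILED;
    }
    return buf;
}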

A fundamental question I have here is: Why does qemu need to
map this at all? It shouldn't itself need to access those ranges,
since the guest is given direct access. It would seem quite a bit
more natural if qemu simply inquired about the underlying GFN range(s)
and handed those to Xen for translation to MFNs and mapping
into guest space.

>  I notice that the current XEN_DOMCTL_memory_mapping does not
>  sanity-check the physical address and size passed from the caller
>  (QEMU). Can QEMU always be trusted? If not, we would need to make Xen
>  aware of the SPA ranges of pmem so that it can refuse to map physical
>  addresses that are in neither normal RAM nor pmem.

I'm not sure what missing sanity checks this is about: The handling
involves two iomem_access_permitted() calls.
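
Those checks only pass for MFN ranges that have been granted to the
domain(s) beforehand, typically by the toolstack via
XEN_DOMCTL_iomem_permission; a minimal (untested) sketch using its
libxc wrapper:

#include <xenctrl.h>

/* Grant a domain access to an MFN range so that the
 * iomem_access_permitted() checks mentioned above can succeed. */
static int grant_pmem_iomem(xc_interface *xch, uint32_t domid,
                            unsigned long first_mfn, unsigned long nr_mfns)
{
    /* last argument: 1 = allow access, 0 = revoke */
    return xc_domain_iomem_permission(xch, domid, first_mfn, nr_mfns, 1);
}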

> 3.3 Guest ACPI Emulation
> 
> 3.3.1 My Design
> 
>  Guest ACPI emulation is composed of two parts: building guest NFIT
>  and SSDT that defines ACPI namespace devices for NVDIMM, and
>  emulating guest _DSM.
> 
>  (1) Building Guest ACPI Tables
> 
>   This design reuses and extends hvmloader's existing mechanism for
>   loading passthrough ACPI tables from binary files, in order to load
>   the NFIT and SSDT built by QEMU:
>   1) Because the current QEMU does not build any ACPI tables when it
>      runs as the Xen device model, this design needs to patch QEMU to
>      build NFIT and SSDT (so far only NFIT and SSDT) in this case.
> 
>   2) QEMU copies NFIT and SSDT to the end of guest memory below
>      4G. The guest address and size of those tables are written into
>      xenstore (/local/domain/domid/hvmloader/dm-acpi/{address,length}),
>      as sketched after this list.
> 
>   3) hvmloader is patched to probe and load device model passthrough
>      ACPI tables from the above xenstore keys. The detected ACPI
>      tables are then appended to the end of the existing guest ACPI
>      tables, just like the current construct_passthrough_tables() does.
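
A minimal (untested) sketch of the QEMU-side half of 2)+3), using
libxenstore's xs_write() with the key layout proposed above; how the
tables get copied below 4G in the first place is left out here:

#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <xenstore.h>

/* Publish where the device model copied its ACPI tables in guest
 * memory, so that hvmloader can pick them up.  gpa/len describe the
 * blob QEMU copied below 4G. */
static int publish_dm_acpi(uint32_t domid, uint64_t gpa, uint64_t len)
{
    struct xs_handle *xs = xs_open(0);
    char path[64], val[32];
    int rc = -1;

    if (!xs)
        return -1;

    snprintf(path, sizeof(path),
             "/local/domain/%u/hvmloader/dm-acpi/address", domid);
    snprintf(val, sizeof(val), "%" PRIu64, gpa);
    if (!xs_write(xs, XBT_NULL, path, val, strlen(val)))
        goto out;

    snprintf(path, sizeof(path),
             "/local/domain/%u/hvmloader/dm-acpi/length", domid);
    snprintf(val, sizeof(val), "%" PRIu64, len);
    if (!xs_write(xs, XBT_NULL, path, val, strlen(val)))
        goto out;

    rc = 0;
 out:
    xs_close(xs);
    return rc;
}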
> 
>   Reasons for this design are listed below:
>   - The NFIT and SSDT in question are quite self-contained, i.e. they
>     do not refer to other ACPI tables and do not conflict with the
>     existing guest ACPI tables built by Xen. Therefore, it is safe to
>     copy them from QEMU and append them to the existing guest ACPI
>     tables.

How is this absence of conflicts being guaranteed? In particular I
don't see how tables containing AML code and coming from different
sources can be guaranteed not to cause ACPI namespace collisions.

> 3.3.3 Alternative Design 2: keeping in Xen
> 
>  As an alternative to switching to QEMU, another design would be to
>  build NFIT and SSDT in hvmloader or the toolstack.
> 
>  The number and parameters of sub-structures in the guest NFIT vary
>  with the vNVDIMM configuration and cannot be decided at compile time.
>  In contrast, the current hvmloader and toolstack can only build
>  static ACPI tables, i.e. their contents are decided statically at
>  compile time, independently of the guest configuration. In order to
>  build the guest NFIT at runtime, this design might take the following
>  steps:
>  (1) xl converts NVDIMM configurations in xl.cfg to corresponding QEMU
>      options,
> 
>  (2) QEMU accepts the above options, figures out the start SPA range
>      address/size/NVDIMM device handles/..., and writes them into
>      xenstore. No ACPI table is built by QEMU.
> 
>  (3) Either xl or hvmloader reads the above parameters from xenstore
>      and builds the NFIT table.
> 
>  For the guest SSDT, more work would be needed: the ACPI namespace
>  devices are defined in the SSDT in AML, so an AML builder would be
>  needed to generate those definitions at runtime.
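
For comparison, this is roughly the flavour of code such an AML builder
ends up being; a minimal (untested) sketch using QEMU's aml-build
helpers (hw/acpi/aml-build.c), emitting just the NVDIMM root device and
omitting the _DSM method and per-NVDIMM child devices a real SSDT would
also need:

#include "qemu/osdep.h"
#include "hw/acpi/aml-build.h"

/* Sketch only: emit the NVDIMM root ACPI namespace device under \_SB.
 * "ACPI0012" is the standard _HID for the NVDIMM root device. */
static void build_nvdimm_devices(Aml *ssdt)
{
    Aml *sb_scope = aml_scope("\\_SB");
    Aml *dev = aml_device("NVDR");

    aml_append(dev, aml_name_decl("_HID", aml_string("ACPI0012")));
    aml_append(sb_scope, dev);
    aml_append(ssdt, sb_scope);
}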

I'm not sure this last half sentence is true: We do some dynamic
initialization of the pre-generated DSDT already, using the
runtime-populated block at ACPI_INFO_PHYSICAL_ADDRESS.

Jan
