
Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support for Xen



On 03/02/16 13:11, Haozhong Zhang wrote:
> On 02/03/16 12:02, Stefano Stabellini wrote:
>> On Wed, 3 Feb 2016, Haozhong Zhang wrote:
>>> On 02/02/16 17:11, Stefano Stabellini wrote:
>>>> On Mon, 1 Feb 2016, Haozhong Zhang wrote:
> [...]
>>>>>  This design treats host NVDIMM devices as ordinary MMIO devices:
>>>>>  (1) The Dom0 Linux NVDIMM driver is responsible for detecting
>>>>>      (through NFIT) and driving host NVDIMM devices (implementing the
>>>>>      block device interface). Namespaces and file systems on host
>>>>>      NVDIMM devices are handled by Dom0 Linux as well.
>>>>>
>>>>>  (2) QEMU mmap(2)s the pmem NVDIMM device (/dev/pmem0) into its
>>>>>      virtual address space (buf).
>>>>>
>>>>>  (3) QEMU gets the host physical address of buf, i.e. the host system
>>>>>      physical address that is occupied by /dev/pmem0, and calls the
>>>>>      Xen hypercall XEN_DOMCTL_memory_mapping to map it into a DomU.
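
For illustration only, a rough, untested sketch of what steps (2) and (3)
above could look like for a libxc caller, assuming a single physically
contiguous host range backs /dev/pmem0 and that the backing host frame
number has already been discovered somehow (the helper name
map_pmem_to_guest is made up; xc_domain_memory_mapping() is the libxc
wrapper around XEN_DOMCTL_memory_mapping):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <xenctrl.h>

    /* Hypothetical helper: map /dev/pmem0 into QEMU's address space (2)
     * and ask Xen to map the backing host frames into the guest physmap
     * (3).  How host_mfn is discovered is exactly the open question
     * discussed further down in this thread. */
    static int map_pmem_to_guest(xc_interface *xch, uint32_t domid,
                                 unsigned long guest_gfn,
                                 unsigned long host_mfn,
                                 unsigned long nr_frames)
    {
        int fd = open("/dev/pmem0", O_RDWR);
        if (fd < 0)
            return -1;

        /* (2) map the whole pmem device into QEMU's virtual address space */
        void *buf = mmap(NULL, nr_frames << XC_PAGE_SHIFT,
                         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED)
            return -1;

        /* (3) map the host frames backing buf into the guest at guest_gfn */
        return xc_domain_memory_mapping(xch, domid, guest_gfn, host_mfn,
                                        nr_frames, DPCI_ADD_MAPPING);
    }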
>>>> How is this going to work from a security perspective? Is it going to
>>>> require running QEMU as root in Dom0, which will prevent NVDIMM from
>>>> working by default on Xen? If so, what's the plan?
>>>>
>>> Oh, I forgot to address the non-root qemu issues in this design ...
>>>
>>> The default user:group of /dev/pmem0 is root:disk, and its permissions
>>> are rw-rw----. We could raise the 'others' permission to rw, so that
>>> non-root QEMU can mmap /dev/pmem0. But it looks too risky.
>> Yep, too risky.
>>
>>
>>> Or, we can make a file system on /dev/pmem0, create files on it, set
>>> the owner of those files to xen-qemuuser-domid$domid, and then pass
>>> those files to QEMU. In this way, non-root QEMU should be able to
>>> mmap those files.
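
If that route is taken, the preparation would presumably be done by the
toolstack while it still has root privileges. A purely hypothetical sketch
(the mount point /mnt/pmem0, the file naming and the helper
prepare_vnvdimm_file are all made up for illustration):

    #include <fcntl.h>
    #include <pwd.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Create a per-domain backing file on a filesystem made on /dev/pmem0
     * and hand it to the de-privileged QEMU user, so QEMU can later open
     * and mmap it without being root.  Run while still root. */
    static int prepare_vnvdimm_file(int domid, off_t size)
    {
        char path[64], user[64];
        struct passwd *pw;
        int fd;

        snprintf(path, sizeof(path), "/mnt/pmem0/guest%d.pmem", domid);
        snprintf(user, sizeof(user), "xen-qemuuser-domid%d", domid);

        pw = getpwnam(user);
        if (!pw)
            return -1;

        fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd < 0)
            return -1;

        /* reserve the guest-visible size, then chown to the QEMU user */
        if (ftruncate(fd, size) || fchown(fd, pw->pw_uid, pw->pw_gid)) {
            close(fd);
            return -1;
        }
        return fd;
    }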
>> Maybe that would work. Worth adding it to the design, I would like to
>> read more details on it.
>>
>> Also note that QEMU initially runs as root but drops privileges to
>> xen-qemuuser-domid$domid before the guest is started. Initially QEMU
>> *could* mmap /dev/pmem0 while it is still running as root, but then it
>> wouldn't work for any devices that need to be mmap'ed at run time
>> (hotplug scenario).
>>
> Thanks for this information. I'll test some experimental code and then
> post a design to address the non-root qemu issue.
>
>>>>>  (ACPI part is described in Section 3.3 later)
>>>>>
>>>>>  Steps (1) and (2) above are already done in current QEMU; only (3)
>>>>>  needs to be implemented in QEMU. No change is needed in Xen for
>>>>>  address mapping in this design.
>>>>>
>>>>>  Open: It seems no system call/ioctl is provided by the Linux kernel
>>>>>        to get the physical address corresponding to a virtual address.
>>>>>        /proc/<qemu_pid>/pagemap provides the VA-to-PA mapping
>>>>>        information. Is it an acceptable solution to let QEMU parse
>>>>>        this file to get the physical address?
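
For reference, a minimal (and, as noted just below, privilege-sensitive)
sketch of that parsing; the helper name va_to_pa is made up:

    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Translate a virtual address of the calling process to a physical
     * address via /proc/self/pagemap.  Each 64-bit entry describes one
     * virtual page: bit 63 = present, bits 0-54 = PFN (zeroed for
     * unprivileged callers on recent kernels, see the quote below). */
    static uint64_t va_to_pa(const void *va)
    {
        long psize = sysconf(_SC_PAGESIZE);
        uint64_t entry = 0;
        FILE *f = fopen("/proc/self/pagemap", "rb");

        if (!f)
            return 0;
        if (fseek(f, ((uintptr_t)va / psize) * sizeof(entry), SEEK_SET) ||
            fread(&entry, sizeof(entry), 1, f) != 1) {
            fclose(f);
            return 0;
        }
        fclose(f);

        if (!(entry & (1ULL << 63)))    /* page not present */
            return 0;
        /* physical address = PFN * page size + offset within the page */
        return (entry & ((1ULL << 55) - 1)) * psize + (uintptr_t)va % psize;
    }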
>>>> Does it work in a non-root scenario?
>>>>
>>> Seemingly no, according to Documentation/vm/pagemap.txt in Linux kernel:
>>> | Since Linux 4.0 only users with the CAP_SYS_ADMIN capability can get PFNs.
>>> | In 4.0 and 4.1 opens by unprivileged fail with -EPERM.  Starting from
>>> | 4.2 the PFN field is zeroed if the user does not have CAP_SYS_ADMIN.
>>> | Reason: information about PFNs helps in exploiting Rowhammer vulnerability.
>>>
>>> A possible alternative is to add a new hypercall similar to
>>> XEN_DOMCTL_memory_mapping, but receiving a virtual address as the address
>>> parameter and translating it to a machine address in the hypervisor.
>> That might work.
>>
>>
>>>>>  Open: For a large pmem, mmap(2) quite possibly does not map all of
>>>>>        the SPA occupied by pmem up front, i.e. QEMU may not be able
>>>>>        to get all SPAs of pmem from buf (in its virtual address
>>>>>        space) when calling XEN_DOMCTL_memory_mapping.
>>>>>        Can the mmap flag MAP_LOCKED or mlock(2) be used to force the
>>>>>        entire pmem to be mapped?
>>>> Ditto
>>>>
>>> No. If I take the above alternative for the first open, maybe the new
>>> hypercall above can inject page faults into dom0 for the unmapped
>>> virtual address so as to force dom0 Linux to create the page
>>> mapping.
>> Otherwise you need to use something like the mapcache in QEMU
>> (xen-mapcache.c), which admittedly, given its complexity, would be best
>> to avoid.
>>
> Definitely not mapcache-like things. What I want is something similar to
> what emulate_gva_to_mfn() in Xen does.

Please, not quite like that.  It would restrict this to only working in a
PV dom0.

MFNs are an implementation detail.  Interfaces should take GFNs, which
have a consistent logical meaning between PV and HVM domains.

As an introduction,
http://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/xen/mm.h;h=a795dd6001eff7c5dd942bbaf153e3efa5202318;hb=refs/heads/staging#l8

We also need to consider the Xen-side security.  Currently a domain may
be given the privilege to map an MMIO range.  IIRC, this allows the emulator
domain to make mappings for the guest, and for the guest to make
mappings itself.  With PMEM, we can't allow a domain to make mappings
itself because it could end up mapping resources which belong to another
domain.  We probably need an intermediate level which only permits an
emulator to make the mappings.

>
> [...]
>>>> If we start asking QEMU to build ACPI tables, why should we stop at NFIT
>>>> and SSDT?
>>> for easing my development of supporting vNVDIMM in Xen ... I mean
>>> NFIT and SSDT are the only two tables needed for this purpose and I'm
>>> afraid of breaking existing guests if I completely switch to QEMU for
>>> guest ACPI tables.
>> I realize that my words have been a bit confusing. Not /all/ ACPI
>> tables, just all the tables regarding devices for which QEMU is in
>> charge (the PCI bus and all devices behind it). Anything related to CPUs
>> and memory (FADT, MADT, etc.) would still be left to hvmloader.
> OK, then it's clear to me. From Jan's reply, at least MCFG comes from
> QEMU. I'll look at whether other PCI-related tables also come from QEMU
> or are similar to those in QEMU. If so, it looks reasonable to let
> QEMU generate them.

It is entirely likely that the current split of sources of ACPI tables
is incorrect.  We should also see what can be done about fixing that.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

