
Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support for Xen



On 02/03/16 14:20, Andrew Cooper wrote:
> >>>>>  (ACPI part is described in Section 3.3 later)
> >>>>>
> >>>>>  Above (1)(2) have already been done in current QEMU. Only (3) needs
> >>>>>  to be implemented in QEMU. No change is needed in Xen for address
> >>>>>  mapping in this design.
> >>>>>
> >>>>>  Open: It seems no system call/ioctl is provided by Linux kernel to
> >>>>>        get the physical address from a virtual address.
> >>>>>        /proc/<qemu_pid>/pagemap provides information of mapping from
> >>>>>        VA to PA. Is it an acceptable solution to let QEMU parse this
> >>>>>        file to get the physical address?
> >>>> Does it work in a non-root scenario?
> >>>>
> >>> Seemingly no, according to Documentation/vm/pagemap.txt in Linux kernel:
> >>> | Since Linux 4.0 only users with the CAP_SYS_ADMIN capability can get PFNs.
> >>> | In 4.0 and 4.1 opens by unprivileged fail with -EPERM.  Starting from
> >>> | 4.2 the PFN field is zeroed if the user does not have CAP_SYS_ADMIN.
> >>> | Reason: information about PFNs helps in exploiting Rowhammer vulnerability.
> >>>
> >>> A possible alternative is to add a new hypercall similar to
> >>> XEN_DOMCTL_memory_mapping but receiving virtual address as the address
> >>> parameter and translating to machine address in the hypervisor.
> >> That might work.
> >>
> >>
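
For reference, the pagemap parsing mentioned in the first Open would look
roughly like the sketch below. It's only an illustration, not existing QEMU
code; it assumes the caller has CAP_SYS_ADMIN and follows the entry format
documented in Documentation/vm/pagemap.txt (one 64-bit entry per virtual
page, PFN in bits 0-54, present flag in bit 63):

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

/* Translate one virtual address of process 'pid' to a PFN via
 * /proc/<pid>/pagemap.  Returns 0 on success, -1 on failure (including
 * the case where the PFN field is zeroed for lack of CAP_SYS_ADMIN). */
static int va_to_pfn(pid_t pid, uintptr_t va, uint64_t *pfn)
{
    char path[64];
    uint64_t entry;
    long page_size = sysconf(_SC_PAGESIZE);
    int fd;

    snprintf(path, sizeof(path), "/proc/%d/pagemap", (int)pid);
    fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    /* One 64-bit entry per virtual page of the target process. */
    if (pread(fd, &entry, sizeof(entry),
              (va / page_size) * sizeof(entry)) != sizeof(entry)) {
        close(fd);
        return -1;
    }
    close(fd);

    if (!(entry & (1ULL << 63)))        /* bit 63: page present */
        return -1;
    *pfn = entry & ((1ULL << 55) - 1);  /* bits 0-54: PFN */
    return *pfn ? 0 : -1;               /* 0 means the PFN was hidden */
}
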
> >>>>>  Open: For a large pmem, mmap(2) quite possibly will not map all SPA
> >>>>>        occupied by pmem at the beginning, i.e. QEMU may not be able to
> >>>>>        get all SPA of pmem from buf (in virtual address space) when
> >>>>>        calling XEN_DOMCTL_memory_mapping.
> >>>>>        Can the mmap flag MAP_LOCKED or mlock(2) be used to force the
> >>>>>        entire pmem to be mmapped?
> >>>> Ditto
> >>>>
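
As a side note on the flags this Open mentions, the MAP_LOCKED/mlock(2)
approach would amount to something like the sketch below (the device path
and length are placeholders; whether this really guarantees that every page
of a large pmem ends up mapped is exactly the question):

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map a whole pmem device and ask the kernel to populate and pin the
 * range up front.  Illustrative only. */
static void *map_pmem(const char *dev /* e.g. "/dev/pmem0" */, size_t len)
{
    int fd = open(dev, O_RDWR);
    void *buf;

    if (fd < 0)
        return MAP_FAILED;

    /* MAP_LOCKED requests population and locking at mmap(2) time ... */
    buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
               MAP_SHARED | MAP_LOCKED, fd, 0);
    close(fd);
    if (buf == MAP_FAILED)
        return MAP_FAILED;

    /* ... and mlock(2) after the fact is the alternative spelling. */
    if (mlock(buf, len) != 0) {
        munmap(buf, len);
        return MAP_FAILED;
    }
    return buf;
}
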
> >>> No. If I take the above alternative for the first open, maybe the new
> >>> hypercall above can inject page faults into dom0 for the unmapped
> >>> virtual address so as to force dom0 Linux to create the page
> >>> mapping.
> >> Otherwise you need to use something like the mapcache in QEMU
> >> (xen-mapcache.c), which admittedly, given its complexity, would be best
> >> to avoid.
> >>
> > Definitely not mapcache like things. What I want is something similar to
> > what emulate_gva_to_mfn() in Xen does.
>
> Please not quite like that.  It would restrict this to only working in a
> PV dom0.
>
> MFNs are an implementation detail.

I don't get this point.
What do you mean by 'implementation detail'? Architectural differences?

> Interfaces should take GFNs which
> are consistent logical meaning between PV and HVM domains.
>
> As an introduction,
> http://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/xen/mm.h;h=a795dd6001eff7c5dd942bbaf153e3efa5202318;hb=refs/heads/staging#l8
>
> We also need to consider the Xen side security.  Currently a domain may
> be given privilege to map an MMIO range.  IIRC, this allows the emulator
> domain to make mappings for the guest, and for the guest to make
> mappings itself.  With PMEM, we can't allow a domain to make mappings
> itself because it could end up mapping resources which belong to another
> domain.  We probably need an intermediate level which only permits an
> emulator to make the mappings.
>

Agreed, this hypercall should not be callable by arbitrary domains. Is there
any existing mechanism in Xen to restrict the callers of a hypercall?
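
The XSM hooks (the dummy policy's XSM_DM_PRIV action only allows the target
domain's device model or the control domain) and the per-domain I/O memory
rangeset maintained by XEN_DOMCTL_iomem_permission and checked with
iomem_access_permitted() look relevant, though I'm not sure they fully cover
this case. A very rough sketch of how a new mapping hypercall could combine
them (pmem_map_op and xsm_pmem_map are made-up names, only for illustration):

static int pmem_map_op(struct domain *d, unsigned long gfn,
                       unsigned long mfn, unsigned long nr_mfns)
{
    struct domain *currd = current->domain;
    int rc;

    /* XSM check: with the dummy policy, XSM_DM_PRIV only lets the
     * device model (or the control domain) of 'd' make this call. */
    rc = xsm_pmem_map(XSM_DM_PRIV, d);
    if ( rc )
        return rc;

    /* The calling domain must have been granted access to this MFN
     * range beforehand, e.g. via XEN_DOMCTL_iomem_permission. */
    if ( !iomem_access_permitted(currd, mfn, mfn + nr_mfns - 1) )
        return -EPERM;

    /* ... insert gfn .. gfn + nr_mfns - 1 into d's p2m ... */
    return 0;
}

If that is the right direction, the 'intermediate level' above would then
amount to granting the iomem range only to the device model domain and never
to the guest itself.
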

> >
> > [...]
> >>>> If we start asking QEMU to build ACPI tables, why should we stop at NFIT
> >>>> and SSDT?
> >>> for easing my development of supporting vNVDIMM in Xen ... I mean
> >>> NFIT and SSDT are the only two tables needed for this purpose and I'm
> >>> afraid to break existing guests if I completely switch to QEMU for
> >>> guest ACPI tables.
> >> I realize that my words have been a bit confusing. Not /all/ ACPI
> >> tables, just all the tables regarding devices for which QEMU is in
> >> charge (the PCI bus and all devices behind it). Anything related to cpus
> >> and memory (FADT, MADT, etc) would still be left to hvmloader.
> > OK, then it's clear to me. From Jan's reply, at least MCFG is from
> > QEMU. I'll look at whether other PCI related tables are also from QEMU
> > or similar to those in QEMU. If yes, then it looks reasonable to let
> > QEMU generate them.
>
> It is entirely likely that the current split of sources of ACPI tables
> is incorrect.  We should also see what can be done about fixing that.
>

How about Jan's comment:
| tables should come from qemu for components living in qemu, and from
| hvmloader for components coming from Xen
i.e. use that as the rule when fixing the split?

Thanks,
Haozhong

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

