
Re: [Xen-devel] [RFC Design Doc] Add vNVDIMM support for Xen



On Wed, Mar 16, 2016 at 08:55:08PM +0800, Haozhong Zhang wrote:
> Hi Jan and Konrad,
> 
> On 03/04/16 15:30, Haozhong Zhang wrote:
> > I suddenly realized that it's unnecessary to let QEMU get the SPA ranges
> > of an NVDIMM or of files on an NVDIMM. We can move that work to the
> > toolstack and pass the SPA ranges it obtains to QEMU. In this way, no
> > privileged operations (mmap/mlock/...) are needed in QEMU, and non-root
> > QEMU should be able to work even with vNVDIMM hotplug in the future.
> > 
> 
> Since the toolstack is now the one to get the NVDIMM SPA ranges, this can
> be done via dom0 kernel interfaces and Xen hypercalls, and can be
> implemented in different ways. I'm wondering which of the following
> is preferred by Xen.
> 
> 1. Given
>     * a file descriptor of either an NVDIMM device or a file on NVDIMM, and
>     * the domain id and the guest MFN where the vNVDIMM is going to be,
>    the Xen toolstack (1) gets its SPA ranges via dom0 kernel interfaces
>    (e.g. sysfs and the FIEMAP ioctl), and (2) calls a hypercall to map
>    the above SPA ranges to the given guest MFN of the given domain.
> 
> 2. Or, given the same inputs, we may combine the above two steps into a
>    new dom0 system call that (1) gets the SPA ranges, (2) calls a Xen
>    hypercall to map the SPA ranges, and, one step further, (3) returns the
>    SPA ranges to userspace (because QEMU needs these addresses to build ACPI).
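
FWIW, step (1) of option 1 can be prototyped entirely from userspace. Below
is a minimal, untested sketch: FIEMAP reports a file's extents as offsets on
the backing block device, and adding the pmem region's base address (readable
from sysfs; which attribute to use is an assumption on my part, not something
fixed by this design) would turn them into host SPA ranges. Error handling is
trimmed, and the helper name is made up for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>       /* FS_IOC_FIEMAP */
#include <linux/fiemap.h>   /* struct fiemap, struct fiemap_extent */

#define MAX_EXTENTS 32

/* Print the physical extents (relative to the backing device) of a file
 * on an NVDIMM-backed filesystem.  Adding the pmem region's base SPA to
 * each fe_physical would yield the SPA ranges to hand to Xen. */
static int print_file_extents(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    size_t sz = sizeof(struct fiemap) +
                MAX_EXTENTS * sizeof(struct fiemap_extent);
    struct fiemap *fm = calloc(1, sz);

    fm->fm_start = 0;
    fm->fm_length = ~0ULL;              /* map the whole file */
    fm->fm_flags = FIEMAP_FLAG_SYNC;    /* flush delayed allocation first */
    fm->fm_extent_count = MAX_EXTENTS;

    if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0) {
        free(fm);
        close(fd);
        return -1;
    }

    for (unsigned int i = 0; i < fm->fm_mapped_extents; i++)
        printf("extent %u: device offset 0x%llx, length 0x%llx\n", i,
               (unsigned long long)fm->fm_extents[i].fe_physical,
               (unsigned long long)fm->fm_extents[i].fe_length);

    free(fm);
    close(fd);
    return 0;
}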
> 
> The first way does not need any modification to the dom0 Linux kernel,
> while the second requires a new system call. I'm not sure whether the Xen
> toolstack, as a userspace program, is considered safe enough to pass host
> physical addresses to the hypervisor. If not, maybe the second one is better?

Well, the toolstack does it already (for MMIO ranges of PCIe devices and
such).

I would prefer 1) as it means less kernel code.
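
For step (2), the shape of the call would be much the same as what the
toolstack already does for PCIe MMIO passthrough via
xc_domain_memory_mapping(). The sketch below is only illustrative (the
helper name is made up, and whether reusing this interface for NVDIMM
pages is appropriate is exactly what needs to be decided):

#include <stdint.h>
#include <xenctrl.h>

/* Map nr_pages host frames starting at host physical address 'spa' into
 * the guest at guest frame 'gfn', using the same libxc call the toolstack
 * uses for PCIe MMIO ranges.  Purely illustrative. */
static int map_spa_range(uint32_t domid, uint64_t spa, uint64_t gfn,
                         unsigned long nr_pages)
{
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    if (!xch)
        return -1;

    int rc = xc_domain_memory_mapping(xch, domid,
                                      gfn,        /* first guest frame */
                                      spa >> 12,  /* first machine frame */
                                      nr_pages,
                                      1 /* DPCI_ADD_MAPPING */);

    xc_interface_close(xch);
    return rc;
}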
> 
> Thanks,
> Haozhong
