
Re: [Xen-devel] [PATCH 4/4] hvmloader: add support to load extra ACPI tables from qemu



>>> On 18.01.16 at 01:52, <haozhong.zhang@xxxxxxxxx> wrote:
> On 01/15/16 10:10, Jan Beulich wrote:
>> >>> On 29.12.15 at 12:31, <haozhong.zhang@xxxxxxxxx> wrote:
>> > NVDIMM devices are detected and configured by software through
>> > ACPI. Currently, QEMU maintains the ACPI tables for vNVDIMM
>> > devices. This patch extends hvmloader's existing mechanism for
>> > loading passthrough ACPI tables so that it can also load extra
>> > ACPI tables built by QEMU.
>> 
>> Mechanically the patch looks okay, but whether it's actually needed
>> depends on whether we indeed want NV RAM managed in qemu instead of
>> in the hypervisor (where imo it belongs); I didn't see any reply yet
>> to that same comment of mine, made (iirc) in the context of another
>> patch.
> 
> One purpose of this patch series is to provide vNVDIMM backed by host
> NVDIMM devices. Detecting and managing host NVDIMM devices (including
> parsing ACPI, managing labels, etc.) requires non-trivial drivers, so
> I leave this work to dom0 Linux. The current Linux kernel abstracts
> NVDIMM devices as block devices (/dev/pmemXX). QEMU then mmaps them
> into a certain range of dom0's address space and asks the Xen
> hypervisor to map that range of address space into a domU.
> 
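> To illustrate, a minimal sketch of that flow from the dom0/QEMU side
> (the device path and guest frame are hypothetical and error handling
> is trimmed; note the MFN lookup is exactly the unresolved piece, see
> (2) below):
> 
>   #include <fcntl.h>
>   #include <stdint.h>
>   #include <sys/mman.h>
>   #include <unistd.h>
>   #include <xenctrl.h>
> 
>   /* Back a vNVDIMM with a host pmem block device exposed by dom0. */
>   int map_vnvdimm(xc_interface *xch, uint32_t domid,
>                   unsigned long guest_gfn, size_t size)
>   {
>       int fd = open("/dev/pmem0", O_RDWR);   /* hypothetical device */
>       void *va;
> 
>       if (fd < 0)
>           return -1;
>       va = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
>       if (va == MAP_FAILED)
>           return -1;
> 
>       /*
>        * Xen must map the machine frames backing 'va' into the guest.
>        * Deriving first_mfn from a dom0 virtual address, and checking
>        * that it really lies in NVDIMM space, are the open problems
>        * below; 0 is only a placeholder here.
>        */
>       unsigned long first_mfn = 0;
>       return xc_domain_memory_mapping(xch, domid, guest_gfn, first_mfn,
>                                       size >> XC_PAGE_SHIFT,
>                                       DPCI_ADD_MAPPING);
>   }
> 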
> However, there are two problems in this Xen patch series and the
> corresponding QEMU patch series, which may require further changes
> in the hypervisor and/or toolstack.
> 
> (1) The QEMU patches use xc_hvm_map_io_range_to_ioreq_server() to map
>     the host NVDIMM to a domU, which results in a VM exit for every
>     guest read/write to the corresponding vNVDIMM device. I'm going
>     to find a way to pass the address range of the host NVDIMM
>     through to a guest domU (similar to what xen-pt in QEMU does);
>     see the first sketch after this list.
>     
> (2) Xen currently does not check whether the address that QEMU asks
>     to map into a domU really falls within the host NVDIMM address
>     space. The hypervisor therefore needs a way to determine the
>     host NVDIMM address space itself, which can be done by parsing
>     the ACPI NFIT table; see the second sketch below.
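> 
> For (1), the registration the QEMU patches currently do looks roughly
> like the sketch below (the ioservid and the range are whatever QEMU
> obtained/assigned elsewhere; this is illustrative only). The point is
> that every guest access to [start, end] then traps to QEMU:
> 
>   #include <stdint.h>
>   #include <xenctrl.h>
> 
>   /* Claim the vNVDIMM guest-physical range as emulated MMIO, so all
>    * guest loads/stores in it are forwarded to this ioreq server. */
>   int claim_vnvdimm_range(xc_interface *xch, domid_t domid,
>                           ioservid_t id, uint64_t start, uint64_t end)
>   {
>       return xc_hvm_map_io_range_to_ioreq_server(xch, domid, id,
>                                                  1 /* is_mmio */,
>                                                  start, end);
>   }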
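> 
> For (2), the NFIT's System Physical Address Range structures are what
> Xen would need to parse to learn which host ranges belong to NVDIMM.
> A sketch of the layout as given by ACPI 6.0 (field names abbreviated):
> 
>   #include <stdint.h>
> 
>   /* ACPI 6.0 NFIT System Physical Address (SPA) Range structure
>    * (type 0), one per contiguous NVDIMM region; 56 bytes, packed. */
>   struct nfit_spa_range {
>       uint16_t type;                /* 0 = SPA Range structure */
>       uint16_t length;              /* 56 */
>       uint16_t range_index;
>       uint16_t flags;
>       uint32_t reserved;
>       uint32_t proximity_domain;
>       uint8_t  range_type_guid[16]; /* e.g. the persistent memory GUID */
>       uint64_t base;                /* host system physical address */
>       uint64_t size;                /* range length in bytes */
>       uint64_t mem_attr;            /* memory mapping attributes */
>   } __attribute__((packed));
> 
>   /* Xen could walk these structures to validate that any range QEMU
>    * asks to map into a guest lies within [base, base + size). */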

These problems are a pretty direct result of the management of
NVDIMM not being done by the hypervisor.

Stating what qemu currently does is, I'm afraid, not really serving
the purpose of hashing out whether the management of NVDIMM,
just like that of "normal" RAM, wouldn't better be done by the
hypervisor. In fact, so far I haven't seen any rationale (other than
the desire to share code with KVM) for the presently chosen solution.
Yet with KVM, qemu is - afaict - much more of an integral part of the
hypervisor than it is in the Xen case (and even there, core
management of the memory is left to the kernel, i.e. to what
constitutes the core hypervisor there).

Jan

