Re: [Xen-devel] [PATCH 4/4] hvmloader: add support to load extra ACPI tables from qemu
On 01/21/2016 12:47 AM, Konrad Rzeszutek Wilk wrote:
> On Thu, Jan 21, 2016 at 12:25:08AM +0800, Xiao Guangrong wrote:
>> On 01/20/2016 11:47 PM, Konrad Rzeszutek Wilk wrote:
>>> On Wed, Jan 20, 2016 at 11:29:55PM +0800, Xiao Guangrong wrote:
>>>> On 01/20/2016 07:20 PM, Jan Beulich wrote:
>>>>> On 20.01.16 at 12:04, <haozhong.zhang@xxxxxxxxx> wrote:
>>>>>> On 01/20/16 01:46, Jan Beulich wrote:
>>>>>>> On 20.01.16 at 06:31, <haozhong.zhang@xxxxxxxxx> wrote:
>>>>>>>> Secondly, the driver implements a convenient block device
>>>>>>>> interface to let software access areas where NVDIMM devices
>>>>>>>> are mapped. The existing vNVDIMM implementation in QEMU uses
>>>>>>>> this interface. As the Linux NVDIMM driver has already done
>>>>>>>> the above, why do we bother to reimplement it in Xen?
>>>>>>> See above; a possibility is that we may need a split model
>>>>>>> (block layer parts in Dom0, "normal memory" parts in the
>>>>>>> hypervisor). Iirc the split is determined by firmware, and
>>>>>>> hence set in stone by the time OS (or hypervisor) boot starts.
>>>>>> For the "normal memory" parts, do you mean the parts that map
>>>>>> the host NVDIMM device's address space range to the guest? I'm
>>>>>> going to implement that part in the hypervisor and expose it as
>>>>>> a hypercall so that it can be used by QEMU.
>>>>> To answer this I need to have my understanding of the
>>>>> partitioning being done by firmware confirmed: if that's the
>>>>> case, then "normal" means the part that doesn't get exposed as a
>>>>> block device (SSD). In any event there's no correlation to guest
>>>>> exposure here.
>>>> Firmware does not manage NVDIMM; all operations on an NVDIMM are
>>>> handled by the OS. Actually, there are lots of things we would
>>>> need to take into account if we moved NVDIMM management into the
>>>> hypervisor:
>>> If you remove the block device part and just deal with the pmem
>>> part, then this gets smaller.
>> Yes indeed. But Xen cannot benefit from NVDIMM BLK, so I think it
>> is not a long-term plan. :)
> Also the _DSM operations - I can't see them being in the hypervisor,
> but only in dom0, which would have the right software to tickle the
> correct ioctl on /dev/pmem to do the "management" (carve up the
> NVDIMM, perform a SMART operation, etc).

Yes, it is reasonable to put it in dom0, and it makes the management
tools happy.

Can dom0 receive the interrupt triggered by device hotplug? If yes, we
can let dom0 handle all of this natively. If not, dom0 could interpret
the ACPI tables, fetch the IRQ information, and tell the hypervisor to
route the IRQ to dom0 - is that doable?

> However I don't know if the hypervisor needs to know all the details
> of an NVDIMM - or just the starting and ending ranges, so that when
> a guest is created and the VT-d tables are constructed, it can be
> assured that the ranges are valid.
>
> I am not an expert on the P2M code - but I think that would need to
> be looked at to make sure it is OK with stitching an E820_NVDIMM
> type "MFN" into a guest PFN.

We had better not use "E820", as it lacks some advantages of ACPI,
such as NUMA, hotplug, and label (namespace) support.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel