
Re: [Xen-devel] [PATCH 4/4] hvmloader: add support to load extra ACPI tables from qemu



On 01/21/16 07:52, Jan Beulich wrote:
> >>> On 21.01.16 at 15:01, <haozhong.zhang@xxxxxxxxx> wrote:
> > On 01/21/16 03:25, Jan Beulich wrote:
> >> >>> On 21.01.16 at 10:10, <guangrong.xiao@xxxxxxxxxxxxxxx> wrote:
> >> > b) some _DSMs control PMEM, so you should filter out these kinds of
> >> >    _DSMs and handle them in the hypervisor.
> >> 
> >> Not if (see above) we follow the model we currently have in place.
> >>
> > 
> > You mean let dom0 Linux evaluate those _DSMs and interact with the
> > hypervisor if necessary (e.g. XENPF_mem_hotadd for memory hotplug)?
> 
> Yes.
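
(Just to be sure we mean the same thing by "interact with the hypervisor":
something along the lines of the existing hotplug path, roughly sketched
below. The wrapper and field names follow what dom0 Linux uses for
XENPF_mem_hotadd today, but please take the details as illustrative rather
than exact.)

/*
 * Rough sketch of the existing dom0 -> Xen memory hotplug notification:
 * after dom0 has evaluated the ACPI side, it hands the new range to Xen
 * via XENPF_mem_hotadd.  Wrapper/field names are from memory and meant
 * only as an illustration (older trees use HYPERVISOR_dom0_op).
 */
#include <xen/interface/platform.h>
#include <asm/xen/hypercall.h>

static int notify_xen_mem_hotadd(unsigned long spfn, unsigned long epfn,
                                 unsigned int pxm)
{
    struct xen_platform_op op = {
        .cmd = XENPF_mem_hotadd,
        .interface_version = XENPF_INTERFACE_VERSION,
    };

    op.u.mem_add.spfn = spfn;   /* first frame of the hot-added range      */
    op.u.mem_add.epfn = epfn;   /* one past the last frame of the range    */
    op.u.mem_add.pxm  = pxm;    /* proximity domain (NUMA node) it sits in */

    /* On success Xen takes over management of [spfn, epfn). */
    return HYPERVISOR_platform_op(&op);
}
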
> 
> >> > c) the hypervisor should manage the PMEM resource pool and partition
> >> >    it among multiple VMs.
> >> 
> >> Yes.
> >>
> > 
> > But I still do not quite understand this part: why must PMEM resource
> > management and partitioning be done in the hypervisor?
> 
> Because that's where memory management belongs. And PMEM,
> unlike PBLK, is just another form of RAM.
> 
> > I mean, if we allow the following steps of operations (for example):
> > (1) partition PMEM in dom0
> > (2) get the address and size of each partition (part_addr, part_size)
> > (3) call a hypercall like nvdimm_memory_mapping(d, part_addr, part_size,
> >     gpfn) to map a partition to the address gpfn in dom d (see the
> >     sketch below)
> > Only the last step requires the hypervisor. Would anything be wrong if
> > we allowed the above operations?
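
(To make step (3) concrete, what I have in mind is roughly the sketch
below. The hypercall name, the subop number and the structure layout are
all invented here purely for illustration; nothing like this exists in
the tree today.)

/*
 * Purely hypothetical interface for step (3): name, subop number and
 * structure layout are made up for illustration only.
 */
#include <xen/interface/xen.h>      /* domid_t                   */
#include <xen/interface/memory.h>
#include <asm/xen/hypercall.h>      /* HYPERVISOR_memory_op()    */

#define XENMEM_nvdimm_map 0x100     /* placeholder subop, not real */

struct xen_nvdimm_memory_mapping {
    domid_t  domid;      /* guest to map the PMEM partition into         */
    uint64_t part_addr;  /* host physical address of the partition (SPA) */
    uint64_t part_size;  /* size of the partition in bytes               */
    uint64_t gpfn;       /* first guest frame number to map it at        */
};

/* dom0/toolstack side: ask Xen to map one partition into guest d. */
static int nvdimm_memory_mapping(domid_t d, uint64_t part_addr,
                                 uint64_t part_size, uint64_t gpfn)
{
    struct xen_nvdimm_memory_mapping m = {
        .domid     = d,
        .part_addr = part_addr,
        .part_size = part_size,
        .gpfn      = gpfn,
    };

    return HYPERVISOR_memory_op(XENMEM_nvdimm_map, &m);
}

The point being that dom0 would do the partitioning, and only the final
mapping step would go through Xen.
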
> 
> The main issue is that this would imo be a layering violation. I'm
> sure it can be made to work, but that doesn't mean that's the way
> it ought to work.
> 
> Jan
> 

OK, then it makes sense to put them in the hypervisor. I'll think about
this and note it in the design document.

Thanks,
Haozhong

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

