
Re: [Xen-devel] [v7][RFC][PATCH 06/13] hvmloader/ram: check if guest memory is out of reserved device memory maps



> From: Tian, Kevin
> Sent: Wednesday, November 19, 2014 4:18 PM
> 
> > From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> > Sent: Wednesday, November 12, 2014 5:57 PM
> >
> > >>> On 12.11.14 at 10:13, <tiejun.chen@xxxxxxxxx> wrote:
> > > On 2014/11/12 17:02, Jan Beulich wrote:
> > >>>>> On 12.11.14 at 09:45, <tiejun.chen@xxxxxxxxx> wrote:
> > >>>>> #2 The flags field in each specific device of the new domctl would
> > >>>>> control whether that device needs to check/reserve its own RMRR
> > >>>>> range. But it is not dependent on the current device assignment
> > >>>>> domctl, so the user can use it to separately control which devices
> > >>>>> need to work as hotplug devices later.
> > >>>>
> > >>>> And this could be left as a second step, in order for what needs to
> > >>>> be done now to not get more complicated than necessary.
> > >>>>
> > >>>
> > >>> Do you mean we currently still rely on the device assignment domctl to
> > >>> provide the SBDF? If so, it looks like nothing should be changed in our
> > >>> policy.
> > >>
> > >> I can't connect your question to what I said. What I tried to tell you
> > >
> > > There is some misunderstanding on my part.
> > >
> > >> was that I don't currently see a need to make this overly complicated:
> > >> Having the option to punch holes for all devices and (by default)
> > >> dealing with just the devices assigned at boot may be sufficient as a
> > >> first step. Yet (repeating just to avoid any misunderstanding) that
> > >> makes things easier only if we decide to require device assignment to
> > >> happen before memory getting populated (since in that case there's
> > >
> > > What do you mean here by 'if we decide to require device assignment to
> > > happen before memory getting populated'?
> > >
> > > Because -quote-
> > > "
> > > At present the device assignment always happens after memory population.
> > > And as I also mentioned previously, I double-checked this sequence with
> > > printk.
> > > "
> > >
> > > Or do you already plan, or have you decided, to change this sequence?
> >
> > So it is now the 3rd time that I'm telling you that part of your
> > decision making as to which route to follow should be to
> > re-consider whether the current sequence of operations shouldn't
> > be changed. Please also consult with the VT-d maintainers (hint to
> > them: participating in this discussion publicly would be really nice)
> > on _all_ decisions to be made here.
> >
> 

Yang and I discussed this. We understand your point about avoiding a new
interface if we can leverage existing code. However, it is not a trivial
effort to move device assignment before p2m population, and there is no
other benefit to doing so beyond this purpose. So we would not suggest
going that way.

The current option sounds reasonable, i.e. passing the list of BDFs
assigned to this VM before populating the p2m, and then having the
hypervisor filter out the reserved regions associated with those BDFs.
This way libxc teaches Xen to create the reserved regions once, and the
filtered information is then returned on later queries.
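
To make the intended flow concrete, below is a rough sketch in C. All of
the structure and function names are hypothetical placeholders for the
new domctl / memory op being discussed, not the actual interface in this
series:

/* Illustrative sketch only: every name below is a placeholder, not the
 * real interface proposed in this patch series. */
#include <stdint.h>

struct rdm_entry {               /* one reserved device memory (RMRR) range */
    uint64_t start_pfn;
    uint64_t nr_pages;
};

/* Placeholders standing in for the new domctl and the reserved-memory
 * query; these are not existing libxc calls. */
int set_assigned_device_list(int domid, const uint32_t *sbdfs,
                             unsigned int nr_sbdfs);
int get_reserved_device_memory_map(int domid, struct rdm_entry *map,
                                   unsigned int *nr_map);

/* Toolstack-side flow: hand Xen the SBDFs assigned to the domain before
 * the p2m is populated, then query back only the reserved regions that
 * belong to those devices. */
int prepare_reserved_regions(int domid,
                             const uint32_t *sbdfs, unsigned int nr_sbdfs,
                             struct rdm_entry *map, unsigned int *nr_map)
{
    /* 1. Record the to-be-assigned devices in the hypervisor. */
    if ( set_assigned_device_list(domid, sbdfs, nr_sbdfs) )
        return -1;

    /* 2. Retrieve the RMRR ranges filtered for exactly those SBDFs. */
    return get_reserved_device_memory_map(domid, map, nr_map);
}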

The limitation of memory wasted due to conflicts can be mitigated, and we
think a further enhancement can be made later in libxc: when populating
the p2m, the reserved regions can be skipped explicitly in the initial
p2m creation phase, so there would be no waste at all. But this
optimization takes some time and can be built incrementally on the
current patch and interface, post the 4.5 release. For now let's focus on
correctness first.

If you agree, Tiejun will move forward and send another series for 4.5.
So far a lot of open issues have been closed with your help, but it also
means the original v7 needs a serious update (the latest code is deep in
the discussion thread on the list).

Thanks
Kevin



 

