
Re: [Xen-devel] [PATCH RFC v1 00/13] Introduce HVM without dm and new boot ABI



On 24/06/15 at 13:52, Boris Ostrovsky wrote:
> On 06/24/2015 06:14 AM, Roger Pau Monné wrote:
>> On 24/06/15 at 12:05, Jan Beulich wrote:
>>>>>> On 24.06.15 at 11:47, <roger.pau@xxxxxxxxxx> wrote:
>>>> What needs to be done (ordered by priority):
>>>>
>>>>   - Clean up the patches; this patch series was done in less than a
>>>>     week.
>>>>   - Finish the boot ABI (this would also be needed for PVH anyway).
>>>>   - Convert the rest of the xc_dom_*loaders to use the physical entry
>>>>     point when present; right now xc_dom_elfloader is the only one
>>>>     usable with HVMlite. This is quite trivial (see patch 10, it's a
>>>>     4 LOC change).
>>>>   - Dom0 support.
>>>>   - Migration.
>>>>   - PCI pass-through.
>>>>
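For reference, the per-loader change mentioned in the list above is
roughly of the shape sketched here. This is an illustration only, not
code from the series; the type and field names (xc_dom_image_sketch,
phys_entrypoint, hvm_container) are placeholders for whatever patch 10
actually defines.

#include <stdint.h>

/* Marker for "the kernel did not advertise a physical entry point". */
#define NO_PHYS_ENTRY  (~(uint64_t)0)

/* Hypothetical subset of the domain-builder state; placeholder names. */
struct xc_dom_image_sketch {
    uint64_t virt_entrypoint;   /* classic PV virtual entry point       */
    uint64_t phys_entrypoint;   /* physical entry point, when present   */
    int      hvm_container;     /* non-zero for an HVM-without-dm guest */
};

/* Each xc_dom_*loader would pick the entry point along these lines. */
static uint64_t pick_entrypoint(const struct xc_dom_image_sketch *dom)
{
    /*
     * An HVMlite guest is started at its physical entry point with
     * paging disabled; a classic PV guest keeps the virtual one.
     */
    if (dom->hvm_container && dom->phys_entrypoint != NO_PHYS_ENTRY)
        return dom->phys_entrypoint;

    return dom->virt_entrypoint;
}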
>>>> IMHO this is what we agreed to do with PVH: make it an HVM guest
>>>> without a device model and without the emulated devices inside of
>>>> Xen. Sooner or later we would need to make that change anyway in
>>>> order to properly integrate PVH into Xen, and we get a bunch of new
>>>> features for free as compared to PVH.
>>>>
>>>> I don't think of this as "throw PVH out of the window and start
>>>> something completely new from scratch"; we are going to reuse some of
>>>> the code paths used by PVH inside of Xen. From a guest POV the changes
>>>> needed to move from PVH to HVMlite concern the boot ABI only, which we
>>>> already agreed should be changed anyway.
>>> I have to admit that I'm having a hard time forming a clear picture
>>> of what the intention now is, especially with the feature freeze
>>> being about 2.5 weeks away: if we assume that this series gets ready
>>> in time, should we drop Boris' 32-bit support patches? It would then
>>> be unfortunate if the series here didn't get ready.
>> TBH I'm not going to make any promises of this being ready before the
>> 4.6 feature freeze, not until I get some feedback from the tools
>> maintainers regarding the libxc changes to unify the PV and HVM domain
>> creation paths.
> 
> FWIW, I gave this a quick spin on Monday and crashed the hypervisor on a
> NULL pointer right away in vapic code. Which, I assume, is not
> surprising since we are not supposed to be there in the first place.
> 
> I'll try it again later today (I was out yesterday), maybe I messed
> something up.

Yes, feature disabling is still not 100% done, I'm afraid. For example,
if your hardware supports vAPIC it will be enabled anyway, which can
then lead to all kinds of trouble. As I said, this is very initial and
I've only tested it on one Nehalem box, which doesn't have vAPIC.
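
To make the missing gating concrete, the kind of check that still needs
to be wired up looks roughly like this. It is only a sketch under
assumed names: domain_config_sketch and has_emulated_lapic are
placeholders, not anything from the series.

#include <stdbool.h>

/* Placeholder for however the series ends up recording "this domain
 * has no emulated local APIC". */
struct domain_config_sketch {
    bool has_emulated_lapic;
};

/* Decide whether hardware APIC virtualization may be enabled. */
static bool want_apic_virtualization(const struct domain_config_sketch *cfg,
                                     bool hw_has_apic_virt)
{
    /*
     * Only enable the hardware assist when the domain actually owns an
     * emulated local APIC; an HVMlite guest without one must leave it
     * off, regardless of what the host CPU supports.
     */
    return hw_has_apic_virt && cfg->has_emulated_lapic;
}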

>>
>>> OTOH I don't think this and Boris' code conflict, and what we have in
>>> the tree PVH-wise is kind of a mess right now anyway, so adding just a
>>> few more bits to it (actually getting rid of some fixme-s, i.e.
>>> reducing the messiness) wouldn't hurt. I'd therefore be inclined to
>>> take the rest of Boris' series once ready, and if the series here gets
>>> ready too it could then also go in. That would then mean someone
>>> (perhaps after 4.6 is branched) cleaning up any no-longer-necessary
>>> PVH special cases, unifying things towards what we now seem to call
>>> HVMlite.
>> I'm not against merging the 32-bit support series for PVH, but I'm
>> certainly not going to invest time in adding 32-bit PVH entry points to
>> any OSes.
> 
> What about Tim's proposal
> (http://lists.xen.org/archives/html/xen-devel/2014-12/msg00596.html)?
> Can this work be made part of it, or at least be made extensible
> towards it?

Yes, the aim of this work is to address some of the points raised in
that email, mainly merging PVH into HVM. But as we have already
discussed, the entry point of HVMlite (or whatever we end up calling it)
is going to be different from the traditional PV/PVH entry point.
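
As an illustration of how a guest could advertise such a separate entry
point, the snippet below emits a Xen ELF note carrying a 32-bit physical
entry address. The note type 18 matches XEN_ELFNOTE_PHYS32_ENTRY in
current Xen headers; whether this RFC ends up using that exact note and
number is an assumption here, and the address is a placeholder.

#include <stdint.h>

/*
 * Sketch only: an ELF note (owner "Xen", type 18) whose descriptor holds
 * the 32-bit physical address at which the guest wants to be entered
 * with paging disabled.  In a real kernel the descriptor would be the
 * link-time address of the entry symbol, not a hard-coded constant.
 */
struct xen_phys_entry_note {
    uint32_t namesz;            /* length of "Xen\0"                  */
    uint32_t descsz;            /* length of the descriptor (4 bytes) */
    uint32_t type;              /* 18, i.e. XEN_ELFNOTE_PHYS32_ENTRY  */
    char     name[4];           /* "Xen"                              */
    uint32_t desc;              /* physical entry address             */
} __attribute__((packed));

static const struct xen_phys_entry_note phys_entry_note
    __attribute__((used, section(".note.Xen"), aligned(4))) = {
    .namesz = 4,
    .descsz = 4,
    .type   = 18,
    .name   = "Xen",
    .desc   = 0x100000,         /* placeholder entry address */
};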

Roger.

