Re: [Xen-devel] HVMLite / PVHv2 - using x86 EFI boot entry
On Wed, Apr 13, 2016 at 10:01:18PM +0200, Luis R. Rodriguez wrote:
> On Wed, Apr 13, 2016 at 03:22:23PM -0400, Konrad Rzeszutek Wilk wrote:
> > On Wed, Apr 13, 2016 at 09:14:08PM +0200, Luis R. Rodriguez wrote:
> > > On Wed, Apr 13, 2016 at 03:02:26PM -0400, Konrad Rzeszutek Wilk wrote:
> > > > On Wed, Apr 13, 2016 at 08:50:10PM +0200, Luis R. Rodriguez wrote:
> > > > > On Wed, Apr 13, 2016 at 11:54:29AM +0200, Roger Pau Monné wrote:
> > > > > > On Fri, Apr 08, 2016 at 11:58:54PM +0200, Luis R. Rodriguez wrote:
> > > > > > > OK, thanks for the clarification -- still no custom entries for Xen!
> > > > > > > We should strive for that, at the very least.
> > > > > > >
> > > > > > > You do have a point about the legacy stuff. There are two options there:
> > > > > > >
> > > > > > >   * Fold legacy support under HVMLite -- which seems to be what we
> > > > > > >     currently want to do (we should evaluate the implications and
> > > > > > >     requirements here for that); or
> > > > > >
> > > > > > I'm not following here. What does it mean to fold legacy support under
> > > > > > HVMlite? HVMlite doesn't have any legacy hardware, and that's the issue
> > > > > > when it comes to using native Linux entry points. Linux might expect some
> > > > > > legacy PC hardware to always be present, which is not true for HVMlite.
> > > > > >
> > > > > > Could you please clarify this point?
> > > > >
> > > > > It seems there is confusion on the terms used. By folding legacy support
> > > > > under HVMLite I meant folding the legacy PV path (classic PV with PV
> > > > > interfaces) under HVMlite.
> > > >
> > > > Ewww.
> > >
> > > Probably a confusion of terms again; by the above I meant to say what you
> > > seem to be indicating below, which is to keep old PV guest support with PV
> > > interfaces using a new shiny entry.
> > >
> > > Or are we really going to nuke full support for old PV guests?
> > Please re-read my email. The hypervisor is not going to nuke it. Linux
> > will stop using them - and hence the pvops will be obsolete.
>
> I meant removing old PV guest support from Linux. You made it crystal clear
> that the hypervisor will keep legacy PV support.
>
> Are we going to remove old PV guest support from Linux upstream long term?

Yes!

> If so then the HVMLite design need not be concerned with supporting legacy crap.

Exactly.

> > > > > I got the impression that if we wanted to remove the old PV path we had
> > > > > to see if we can address old classic PV x86 guests through HVMlite,
> > > > > otherwise we'd have to live with the old PV path for the long term.
> > > >
> > > > No. We need to deprecate the PV paths - and the agreement we hammered out
> > > > with the x86 maintainers was that once PVH/HVMLite is stable the clock
> > > > would start ticking on PV (pvops) life. All the big users of PV Linux
> > > > were told in person to prep them for this.
> > >
> > > That's nice. *How* that is done is what we are determining here.
> >
> > What is being discussed is how PVH/HVMLite is supposed to boot up.
>
> Or the merits of different bootup paths.

That's part of it...

> > Unless you are saying that you want to be the maintainer of pvops
> > and want to extend the life of pvops in Linux and are trying to make
> > it work under HVMLite?
>
> Huh? If you look at pvops commits you'll see I've been responsible for
> most of the pvops removal already, and my ongoing patches should show that
> my goal is to streamline this further.
>
> I want to clarify now what our exist path is: do we need to care
> about legacy crap?

exist? Existing?

And by 'legacy crap' you mean 'pvops' - then the answer is no.

The big existing use case of pvops is to boot Linux as the initial domain.
If we can swap it over to PVH/HVMLite then that frees us from having to use
pvops.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel