
Re: [Xen-devel] RFC: xen config changes v4



> > > >> I would prefer to hide it on PAE and x86_64.
> > > >
> > > >
> > > > Okay, as long as it is still _possible_ somehow to configure it.
> > > 
> > > That raises the question: is all this just for 32-bit non-PAE?
> > 
> > There was another reason. Some distros remove CONFIG_XEN_DOM0 altogether
> > even though they do enable the rest of the pieces (backends, frontends,
> > etc).
> > 
> > Which raises the question - why do we care about DOM0 at all?
> > 
> > What we care about is drivers - either frontend or backend. If we want
> > backends and we want PV - then we want to build a kernel that can boot
> > as a normal PV guest or as a dom0 PV guest.
> > 
> > Ditto for HVM - if you want to build a kernel that won't do PV but
> > can do backends - we should be able to do that.
> > 
> > Or PVH - we want a domain that can be a backend (or frontend).
> > 
> > That does mean "PV" gets broken down further into concrete
> > pieces that have nothing to do with drivers.
> > 
> > The idea would be that you would just select four knobs:
> > 
> >  Yes/No Backend PV drivers [and maybe remove the PV part?]
> >  Yes/No Frontend PV drivers [and maybe remove the PV part?]
> >  Yes/No PV support (so utilizing the PV ABI)
> >  Yes/No PVH support (a stricter subset of the PV ABI - with fewer pieces)
> > 
> > The HVM support would automatically be picked if the config had
> > the 'baremetal'-type support - like IOAPIC, APIC, ACPI, etc.
> > 
> > So if you said Y, N, N, N, the kernel would only be able to
> > boot in HVM mode but would still have pciback, netback, scsiback,
> > blkback, and usbback (good for a device backend). And it could be a
> > PAE or non-PAE kernel.
> > 
> > If you said N, Y, Y, Y, then it could boot under HVM, PV, or PVH, and
> > only have pcifront, netfront, scsifront, blkfront, and usbfront
> > (not very good for an initial domain).
> > 
> > And so on.
> 
> It makes sense.
> 
> 
> > I hope I haven't confused the matter?
> 
> Nope, I think it clarifies things, thanks.

Though it does mean it would add more #ifdef-ery, or cleanups
to the existing drivers, so that they can be compiled for
different platforms without any assumptions.

> 
> In this context the issue we were discussing is what to do with the
> other PV interfaces for PV on HVM guests, such as HVMOP_pagetable_dying.
> I think it would be natural to enable them when Frontend PV drivers are
> enabled, without any additional Kconfig options.

I would put this in 'Enlightenment support for Xen' - which would be
the basic foundation to make any kernel work under Xen. This would
pull in some _infrastructure_: regardless of whether it is a backend
or a frontend, we need grant ops, event channels, and support for
migration.

Perhaps add that as a new option under CONFIG_HYPERVISOR_GUEST, and not
have it depend on CONFIG_PARAVIRT - as in theory you can have a
non-PARAVIRT HVM guest running with PV drivers (I haven't tried it, but
I would think it can be done?).
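A minimal sketch of that, assuming a made-up XEN_ENLIGHTEN symbol that
both of the driver knobs above would 'select':

    # Hypothetical: the 'enlightenment' foundation - grant tables, event
    # channels, migration support. Note it depends on HYPERVISOR_GUEST,
    # not on PARAVIRT, so a non-PARAVIRT HVM guest could still use it:
    config XEN_ENLIGHTEN
            bool "Core Xen guest support"
            depends on HYPERVISOR_GUEST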

Regarding 'HVMOP_pagetable_dying' - if it is part of the
'enlightenment for Xen' foundation, then it would be folded in. If it
is not, but the platform looks to be a non-PV kernel (APIC, ACPI,
IOAPIC, MSI, PCI, etc), then it would be automatically enabled.
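In Kconfig terms, 'automatically enabled' could simply be a def_bool,
again with a made-up symbol name:

    # Hypothetical: PV-on-HVM extras such as HVMOP_pagetable_dying,
    # switched on whenever frontends are enabled on a kernel that also
    # carries the baremetal pieces:
    config XEN_PVHVM_OPS
            def_bool y
            depends on XEN_FRONTEND_DRIVERS && X86_LOCAL_APIC && ACPI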

BTW, when I think of a PV kernel it is non-APIC, non-ACPI, non-... a
lot of stuff. I did build one like that way back for 3.0 and it was
quite slim. Hmm, maybe we should even provide a 'defconfig' just to
make sure we can test this kind of build?
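For illustration only, such a defconfig fragment might start out along
these lines (a guess at a slim, PV-only guest build - not a tested
config):

    # xen_pv_defconfig (hypothetical): slim PV-only guest kernel
    CONFIG_HYPERVISOR_GUEST=y
    CONFIG_PARAVIRT=y
    CONFIG_XEN=y
    CONFIG_HVC_XEN=y
    CONFIG_XEN_BLKDEV_FRONTEND=y
    CONFIG_XEN_NETDEV_FRONTEND=y
    # CONFIG_ACPI is not set
    # CONFIG_PCI is not set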

Luis, sorry for hijacking this thread and expanding the scope of this work!

I think it would be fantastic to make this work, and it would help a
lot in the future - but right now it is a bit of a complex riddle to
untangle!
