
Re: Proposal for consistent Kconfig usage by the hypervisor build system



Hi Jan,

Apologies for the late reply.

On 12/01/2023 16:52, Jan Beulich wrote:
> (re-sending with REST on Cc, as requested at the community call)

> At present we use a mix of Makefile and Kconfig driven capability checks for
> tool chain components involved in the building of the hypervisor.  Which
> approach is used where is in part a result of the relatively late introduction
> of Kconfig into the build system, but in other places also simply a result of
> the differing tastes of different contributors.  Switching to a uniform model,
> however, has drawbacks as well:
>   - A uniformly Makefile based model is not in line with Linux, where Kconfig
>     actually comes from (at least as far as we're concerned; there may be
>     earlier origins).  This model is also disliked by some community members.
>   - A uniformly Kconfig based model suffers from a weakness of Kconfig in that
>     dependent options are silently turned off when their dependencies aren't
>     met.  This has the undesirable effect that a carefully crafted .config may
>     be silently converted to one with features turned off which were intended
>     to be on.  While this could be deemed expected behavior when the dependency
>     is itself an option selected by the person configuring the hypervisor, it
>     certainly can be surprising when the dependency is an auto-detected tool
>     chain capability.  Furthermore, there's no automatic re-running of kconfig
>     when any part of the tool chain changes.  (Despite knowing of this in
>     principle, I've still been hit by it more than once in the past: if one
>     rebuilds a tree which wasn't touched for a while, and some time has already
>     passed since updating to the newer component, one may not immediately make
>     the connection.)
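To make the second drawback concrete, the failure mode being described is, as I understand it, something like the following (the option and instruction names are made up purely for illustration):

config AS_NEW_INSN
        def_bool $(as-instr,some-new-insn)

config FEATURE_FOO
        bool "Foo support"
        depends on AS_NEW_INSN
        default y

A .config with CONFIG_FEATURE_FOO=y silently loses the option the next time kconfig happens to run against an assembler which fails the probe, and nothing forces kconfig to re-run merely because the assembler was swapped underneath an existing tree.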

> Therefore I'd like to propose that we use an intermediate model: Detected tool
> chain capabilities (and the like) may only be used to control optimization
> (i.e. including their use as dependencies for optimization controls) and to
> establish the defaults of options.  They may not be used to control
> functionality, i.e. they may in particular not be specified as a dependency of
> an option controlling functionality.  This way, unless defaults were
> overridden, things will build, and non-default settings will be honored
> (albeit potentially resulting in a build failure).
>
> For example
>
> config AS_VMX
>         def_bool $(as-instr,vmcall)
>
> would be okay (as long as we have fallback code to deal with the case of too
> old an assembler; raising the baseline there is a separate topic), whereas
> instead of having XEN_SHSTK depend on HAS_AS_CET_SS, as it does currently,
> something like
>
> config XEN_SHSTK
>         bool "Supervisor Shadow Stacks"
>         default HAS_AS_CET_SS
>
> would be the way to go.

I think your intermediate model makes sense.


> It was additionally suggested that, for a better user experience, unmet
> dependencies which are known to result in build failures (which at times may
> be hard to associate back with the original cause) would be re-checked by
> Makefile-based logic, leading to an early build failure with a comprehensible
> error message.  Personally I'd prefer these to be just warnings (first and
> foremost to avoid failing the build just because of a broken or stale check),
> but I can see that they might be overlooked when there's a lot of other
> output.

If we wanted the Makefile to check the available features, then I would prefer an early error rather than a warning. That said...

> In any event we may want to try to figure out an approach which would make
> sufficiently sure that Makefile and Kconfig checks don't go out of sync.

... this is indeed a concern. How incomprehensible would the error be if we don't check it in the Makefile?
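For what it's worth, the kind of Makefile-level re-check I have in mind would look roughly like the sketch below. This is only an illustration, not a patch: it assumes the generated Kconfig output has already been included, it hand-rolls the assembler probe rather than reusing our existing build macros, and setssbsy is merely an example CET-SS instruction to probe for.

ifeq ($(CONFIG_XEN_SHSTK),y)
# Probe the assembler directly: does it accept a CET-SS instruction?
as-has-cet-ss := $(shell echo 'setssbsy' | \
                 $(CC) -c -x assembler -o /dev/null - 2>/dev/null && echo y)
ifneq ($(as-has-cet-ss),y)
$(error CONFIG_XEN_SHSTK is enabled, but $(CC) cannot assemble CET-SS instructions)
endif
endif

The duplication between such a probe and the Kconfig $(as-instr,...) check is exactly where the two could drift apart; generating both from a single list of option/probe pairs might be one way to keep them in sync.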

Cheers,

--
Julien Grall



 

