
Re: [PATCH v2 1/2] docs: update hyperlaunch device tree


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: "Daniel P. Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>
  • Date: Thu, 3 Aug 2023 13:17:09 -0400
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 03 Aug 2023 17:17:25 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 8/3/23 08:19, Jan Beulich wrote:
> On 03.08.2023 12:44, Daniel P. Smith wrote:
>> --- a/MAINTAINERS
>> +++ b/MAINTAINERS
>> @@ -332,6 +332,15 @@ M: Nick Rosbrook <rosbrookn@xxxxxxxxx>
>>   S:    Maintained
>>   F:    tools/golang
>> +HYPERLAUNCH
>> +M:     Daniel P. Smith <dpsmith@xxxxxxxxxxxxxxxxxxxx>
>> +M:     Christopher Clark <christopher.w.clark@xxxxxxxxx>
>> +W:     https://wiki.xenproject.org/wiki/Hyperlaunch
>> +S:     Supported
>> +F:     docs/design/launch/hyperlaunch.rst
>> +F:     docs/design/launch/hyperlaunch-devicetree.rst
>> +F:     xen/common/domain-builder/
>
> I would generally suggest that maintainership changes come in a separate
> patch. Furthermore aiui lots of stuff is going to be moved from elsewhere,
> and such code may better stay under its original maintainership (unless
> it was agreed that it would shift). So initially maybe best to name the
> original maintainers here under M: and add the two of you with R:?

I can do this as a separate patch and mark it as a fix for `d4f3125f1b docs/designs/launch: Hyperlaunch design document`. Christopher and I are the original authors of the only existing files currently covered by this new MAINTAINERS entry.
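
For reference, I would expect that separate patch to carry the usual tag, something along the lines of:

  Fixes: d4f3125f1b ("docs/designs/launch: Hyperlaunch design document")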

As for code moving here: the dom0less rebranding proposal called for an additional MAINTAINERS section, titled HYPERLAUNCH DOM0LESS COMPATIBILITY, that would retain the maintainers from the ARM section (or new ones, if Arm wanted to propose others).
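
Purely as a sketch, with the M: names and the covered file left as placeholders pending that discussion, such a section might look something like:

  HYPERLAUNCH DOM0LESS COMPATIBILITY
  M:     <the current ARM maintainers, or others Arm proposes>
  R:     Daniel P. Smith <dpsmith@xxxxxxxxxxxxxxxxxxxx>
  R:     Christopher Clark <christopher.w.clark@xxxxxxxxx>
  S:     Supported
  F:     <the dom0less device tree parsing file(s) that series would move>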

There are several reasons for putting Christopher and me at the top:

- The code in v1 was a conglomeration of reused/relocated code and a substantial amount of new code around it.

- As mentioned regarding the HYPERLAUNCH DOM0LESS COMPATIBILITY section, there may be paths below HYPERLAUNCH that are owned by others, but ultimately we conceived, designed, and created the capability. So it falls on us to ensure that anything done in a sub-feature doesn't break or violate the larger design we sought to achieve, while also not letting that burden fall back on THE REST.

> I also don't think it makes sense to include a not-yet-populated path
> here; who knows what this is going to change to by the time things get
> committed.

Well, if my proposed plan is executed as I suggested, hopefully there would soon be a series that moves the dom0less device tree parsing under that path. To stay in line with the above and address your concern: once the HYPERLAUNCH DOM0LESS COMPATIBILITY section is added to cover the file(s) that series adds, the HYPERLAUNCH section could then be updated with the top-level path.
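
Concretely, and only sketching the end state, that later update to HYPERLAUNCH would just be the one line from this patch, deferred until the path actually exists in-tree:

  F:     xen/common/domain-builder/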

v/r,
dps
