
Re: [PATCH v2 1/2] docs: update hyperlaunch device tree



Hi,

On 03/08/2023 11:44, Daniel P. Smith wrote:
+This node describes a boot module loaded by the boot loader. A ``module`` node
+will often appear repeatedly and will require a unique and DTB compliant name
+for each instance.

For clarification, do you mean module@<unit>? Or something different?
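
In other words, would each instance look something like this (unit addresses made up by me)?

    module@0 {
        compatible = "module,kernel", "module,index";
        module-index = <1>;
    };
    module@1 {
        compatible = "module,ramdisk", "module,index";
        module-index = <2>;
    };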

The compatible property is required to identify that the
+node is a ``module`` node, the type of boot module, and what it represents.
-This node describes a boot module loaded by the boot loader. The required
-compatible property follows the format: module,<type> where type can be
-“kernel”, “ramdisk”, “device-tree”, “microcode”, “xsm-policy” or “config”. In
-the case the module is a multiboot module, the additional property string
-“multiboot,module” may be present. One of two properties is required and
-identifies how to locate the module. They are the mb-index, used for multiboot
-modules, and the module-addr for memory address based location.
+Depending on the type of boot module, the ``module`` node will require either a
+``module-index`` or a ``module-addr`` property to be present. They provide the
+boot-module-specific way of locating the boot module in memory.
+
+Properties
+""""""""""
compatible
    This identifies what the module is and thus what the hypervisor
    should use the module for during domain construction. Required.
-mb-index
-  This identifies the index for this module in the multiboot module chain.
+  Format: "module,<module type>"[, "module,<locating type>"]
+          module type: kernel, ramdisk, device-tree, microcode, xsm-policy,
+                       config

All but the last are pretty self-explanatory. Can you clarify what the last one is?

+
+          locating type: index, addr

It is not clear to me why you need to specify the locating type in the compatible. Would it not be sufficient to check for the presence of either module-index or module-addr?

If you still want both, then which property belongs to which compatible?
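
I.e. would it not be enough to write just the following and check which of the two properties is present?

    kernel {
        compatible = "module,kernel";
        module-index = <1>;
    };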

+
+module-index
+  This identifies the index for this module when in a module chain.
    Required for multiboot environments.

'multiboot' is somewhat overloaded as we also use it to describe the binding in the device-tree. So I would clarify which multiboot you are referring to.

I assume this is x86 multiboot. That said, my knowledge about it is limited. How would a user be able to find the index to write down here?
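
For instance, if the modules are loaded from a GRUB menuentry along these lines (paths made up by me):

    multiboot2 /boot/xen.gz
    module2    /boot/hyperlaunch.dtb
    module2    /boot/vmlinuz-dom0
    module2    /boot/initrd-dom0

would module-index = <1> then refer to /boot/vmlinuz-dom0 (counting from 0, with the config DTB as module 0)? Spelling this out in the binding would help.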

+ Format: Integer, e.g. <0>
+
  module-addr
    This identifies where in memory this module is located. Required for
    non-multiboot environments.
+ Format: DTB Reg <start size>, e.g. <0x0 0x20000>

What is the expected number of cells?

+
  bootargs
    This is used to provide the boot params to kernel modules.
+ Format: String, e.g. "ro quiet"
+
  .. note::  The bootargs property is intended for situations where the same kernel multiboot module is used for more than one domain.


I realize this wasn't added in your patch. But it is not entirely clear what this means given that an admin may still want to use 'bootargs' even if there is a single kernel.
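
If I am reading the note right, the intent is something like this (node names and indices made up by me), i.e. the same kernel module reused with a different command line per domain:

    domU1 {
        kernel {
            compatible = "module,kernel", "module,index";
            module-index = <1>;
            bootargs = "root=/dev/vda1";
        };
    };
    domU2 {
        kernel {
            compatible = "module,kernel", "module,index";
            module-index = <1>;
            bootargs = "root=/dev/vdb1";
        };
    };

If so, it would be worth stating that the property is also usable with a single kernel.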

+
+Example Configuration
+---------------------
+
+Below are two example device tree definitions for the hypervisor node. The
+first is an example of a multiboot-based configuration for x86 and the second
+is a module-based configuration for Arm.
+
+Multiboot x86 Configuration:
+""""""""""""""""""""""""""""
+
+::
+
+    /dts-v1/;
+
+    / {
+        chosen {
+            hypervisor {
+                compatible = "hypervisor,xen", "xen,x86";
+
+                dom0 {
+                    compatible = "xen,domain";
+
+                    domid = <0>;

This is actually a good example of where '0' would become confusing: because the name of the domain node is 'dom0', one could mistakenly assume that it means domid 0 will be assigned.

+
+                    role = <9>;

Reading this, I wonder if using a number is actually a good idea. While this is machine friendly, it is not human friendly.

The most human-friendly interface would be to use a string, but I understand this is more complex to parse. So maybe we could use some pre-processing (like Linux does) to ease the creation of the hyperlaunch DT.

Bertrand, Stefano, what do you think?
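
To give an idea of what I mean, with the C pre-processor (as Linux and U-Boot do for their dts files) one could write something like this (header and macro names are only illustrative):

    #include <dt-bindings/xen/hyperlaunch.h>

    dom0 {
        compatible = "xen,domain";
        role = <XEN_DOM_ROLE_HARDWARE>;
        mode = <(XEN_DOM_MODE_64BIT | XEN_DOM_MODE_PVH)>;
    };

and run it through cpp before dtc.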

+                    mode = <12>;
+
+                    domain-uuid = [B3 FB 98 FB 8F 9F 67 A3 8A 6E 62 5A 09 13 F0 8C];
+
+                    cpus = <1>;
+                    memory = "1024M";
+
+                    kernel {
+                        compatible = "module,kernel", "module,index";
+                        module-index = <1>;
+                    };
+
+                    initrd {
+                        compatible = "module,ramdisk", "module,index";
+                        module-index = <2>;
+                    };
+                };
+
+                dom1 {
+                    compatible = "xen,domain";
+                    domid = <1>;
+                    role = <0>;
+                    capability = <1>;
+                    mode = <12>;
+                    domain-uuid = [C2 5D 91 CB 60 4B 45 75 89 04 FF 09 64 54 1A 74];
+                    cpus = <1>;
+                    memory = "1024M";
+
+                    kernel {
+                        compatible = "module,kernel", "module,index";
+                        module-index = <3>;
+                        bootargs = "console=hvc0 earlyprintk=xen root=/dev/ram0 rw";
+                    };
+
+                    initrd {
+                        compatible = "module,ramdisk", "module,index";
+                        module-index = <4>;
+                    };
+                };
+            };
+        };
+    };
+
+
+
+The multiboot modules supplied when using the above config would be, in order:
+
+* (the above config, compiled)
+* kernel for PVH unbounded domain
+* ramdisk for PVH unbounded domain
+* kernel for PVH guest domain
+* ramdisk for PVH guest domain
+
+Module Arm Configuration:
+"""""""""""""""""""""""""
+
+::
+
+    /dts-v1/;
+
+    / {
+        chosen {
+            hypervisor {
+                compatible = "hypervisor,xen";
+
+                // Configuration container
+                config {
+                    compatible = "xen,config";
+
+                    module {
+                        compatible = "module,xsm-policy";
+                        module-addr = <0x0000ff00 0x80>;
+
+                    };
+                };
+
+                // Unbounded Domain definition
+                dom0 {
+                    compatible = "xen,domain";
+
+                    domid = <0>;
+
+                    role = <9>;
+
+                    mode = <12>; /* 64 BIT, PVH */

Arm guests have similar features compared to PVH guests, but they are strictly not the same. So we have been trying to avoid using the term on Arm.

I would prefer if we continue to avoid using the word 'PVH' to describe Arm. Let's just call them 'Arm guests'.

+
+                    memory = <0x0 0x20000>;

Here you use the integer version, but AFAICT this wasn't described in the binding above.

+                    security-id = "dom0_t";
+
+                    module {
+                        compatible = "module,kernel";
+                        module-addr = <0x0000ff00 0x80>;

Reading the binding, this suggests that the first cell is the start address and the second is the size. Cells are 32 bits. So what if you have a 64-bit address?

For the 'reg' property, the DT spec addresses this by using #address-cells and #size-cells to indicate the number of cells for each.
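
In other words, something along these lines (cell counts picked by me just for the example):

    config {
        compatible = "xen,config";
        #address-cells = <2>;
        #size-cells = <1>;

        module {
            compatible = "module,xsm-policy";
            module-addr = <0x0 0x80000000 0x80>;
        };
    };

so that a 64-bit start address can be expressed unambiguously.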

+                        bootargs = "console=hvc0";
+                    };
+                    module {
+                        compatible = "module,ramdisk";
+                        module-addr = <0x0000ff00 0x80>;
+                    };
+                };
+            };
+        };
+    };
+
+The modules that would be supplied when using the above config would be:
+
+* (the above config, compiled into hardware tree)
+* XSM policy
+* kernel for unbounded domain
+* ramdisk for unbounded domain
+* kernel for guest domain
+* ramdisk for guest domain
+
+The hypervisor device tree would be compiled into the hardware device tree and
+provided to Xen using the standard method currently in use.

It is not clear what you mean by 'compiled in'. Do you mean the /hypervisor node will be present in the device-tree provided to Xen?
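
For instance, would this be done by including the hyperlaunch dts in the board dts before running dtc (file names made up by me):

    /* board.dts */
    #include "board-base.dtsi"
    #include "hyperlaunch.dtsi"

so that the DTB handed over by the bootloader already contains the /chosen/hypervisor node?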

The remaining
+modules would need to be loaded in the respective addresses specified in the
+`module-addr` property.

Cheers,

--
Julien Grall



 

