Re: [Xen-devel] [PATCH v4 06/14] xen: Move the hvm_start_info C representation from libxc to public/xen.h
>>> On 21.03.16 at 18:04, <roger.pau@xxxxxxxxxx> wrote:
> On Tue, 15 Mar 2016, Jan Beulich wrote:
>> >>> On 14.03.16 at 18:55, <anthony.perard@xxxxxxxxxx> wrote:
>> > --- a/xen/include/public/xen.h
>> > +++ b/xen/include/public/xen.h
>> > @@ -841,6 +841,37 @@ typedef struct start_info start_info_t;
>> >  */
>> > #define XEN_HVM_START_MAGIC_VALUE 0x336ec578
>> >
>> > +#if defined(__i386__) || defined(__x86_64__)
>> > +/* C representation of the x86/HVM start info layout.
>> > + *
>> > + * The canonical definition of this layout is above, this is just a way to
>> > + * represent the layout described there using C types.
>> > + *
>> > + * NB: the packed attribute is not really needed, but it helps us enforce
>> > + * the fact that this is just a representation, and it might indeed
>> > + * be required in the future if there are alignment changes.
>> > + */
>
> ^ Rationale on why the packed attribute was added.

Well, I admit to having overlooked this comment, but I don't see how the
packed attribute helps enforce anything. Hence, Anthony, I think that
part of the comment should be removed together with the attribute.

> I would really like to avoid placing this in public headers, or else
> people will think this is the definition of the payload and will forget
> that this is just a C representation of it, but the definition is in a
> comment just above. I want to avoid the issues we have already seen with
> the usage of C structures as definitions of the placement of payloads in
> memory.
>
> If this really has to be there, please guard it with:
>
> #if defined(__XEN__) || defined(__XEN_TOOLS__)
>
> So only the Xen kernel/tools can use it.

And hvmloader can't. Not a good idea imo.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel