Re: [PATCH] livepatch: do not use .livepatch.funcs section to store internal state
On Thu, Nov 16, 2023 at 01:39:27PM +0100, Jan Beulich wrote:
> On 16.11.2023 12:58, Roger Pau Monne wrote:
> > --- a/xen/include/public/sysctl.h
> > +++ b/xen/include/public/sysctl.h
> > @@ -991,10 +991,7 @@ struct livepatch_func {
> >      uint32_t new_size;
> >      uint32_t old_size;
> >      uint8_t version; /* MUST be LIVEPATCH_PAYLOAD_VERSION. */
> > -    uint8_t opaque[LIVEPATCH_OPAQUE_SIZE];
> > -    uint8_t applied;
> > -    uint8_t patch_offset;
> > -    uint8_t _pad[6];
> > +    uint8_t _pad[39];
>
> Should this be LIVEPATCH_OPAQUE_SIZE+8 and ...

Hm, I'm not sure that's any clearer.  In fact I think
LIVEPATCH_OPAQUE_SIZE shouldn't have leaked into sysctl.h in the first
place.

If we later want to add more fields and bump the version, isn't it
easier to have the padding size clearly specified as a number?

> > --- a/xen/include/xen/livepatch.h
> > +++ b/xen/include/xen/livepatch.h
> > @@ -13,6 +13,9 @@ struct xen_sysctl_livepatch_op;
> >
> >  #include <xen/elfstructs.h>
> >  #include <xen/errno.h> /* For -ENOSYS or -EOVERFLOW */
> > +
> > +#include <public/sysctl.h> /* For LIVEPATCH_OPAQUE_SIZE */
> > +
> >  #ifdef CONFIG_LIVEPATCH
> >
> >  /*
> > @@ -51,6 +54,12 @@ struct livepatch_symbol {
> >      bool_t new_symbol;
> >  };
> >
> > +struct livepatch_fstate {
> > +    unsigned int patch_offset;
> > +    enum livepatch_func_state applied;
> > +    uint8_t opaque[LIVEPATCH_OPAQUE_SIZE];
>
> ... this use a separate, new (and internal only) constant?  Thus also
> eliminating the need to include public/sysctl.h above?

The size of the opaque buffer here is tied to the size of the buffer
in the livepatch_expectation data field, so even if we used a
different constant internally we would still need to ensure that the
internal size is >= LIVEPATCH_OPAQUE_SIZE; I'm not sure there's much
benefit in that.

In any case, I would suggest doing this in a follow-up patch; the
content of this patch is IMO enough.

Thanks, Roger.
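[Editor's note: the two invariants under discussion can be pinned down
at compile time. First, the public struct must not change size when the
internal fields are folded into padding: the quoted diff's arithmetic
gives 31 + 1 + 1 + 6 == 39 bytes, which is also where Jan's
LIVEPATCH_OPAQUE_SIZE+8 suggestion comes from. Second, as Roger notes,
any internal opaque buffer must stay at least LIVEPATCH_OPAQUE_SIZE
bytes because the livepatch_expectation data field shares that size.
A minimal standalone C sketch of those checks follows; the struct names
and the reduced field layout are illustrative rather than the actual
Xen code, which would use BUILD_BUG_ON() instead of _Static_assert.]

/*
 * Standalone sketch of the compile-time checks implied by the thread
 * above.  Only the struct tails visible in the quoted diff are
 * reproduced; "old_tail", "new_tail" and "livepatch_fstate_sketch"
 * are hypothetical names for illustration.
 */
#include <stdint.h>

#define LIVEPATCH_OPAQUE_SIZE 31   /* value implied by 39 - 8 */

/* Tail of the public struct before the patch. */
struct old_tail {
    uint8_t version;
    uint8_t opaque[LIVEPATCH_OPAQUE_SIZE];
    uint8_t applied;
    uint8_t patch_offset;
    uint8_t _pad[6];
};

/* Tail of the public struct after the patch. */
struct new_tail {
    uint8_t version;
    uint8_t _pad[39];
};

/* The public ABI must not change size when internal fields become
 * plain padding. */
_Static_assert(sizeof(struct old_tail) == sizeof(struct new_tail),
               "public livepatch_func layout changed size");

/* Internal per-function state, moved out of the public header.  If
 * the internal buffer ever used its own constant, it would still
 * have to be >= LIVEPATCH_OPAQUE_SIZE, since livepatch_expectation
 * data shares that size. */
struct livepatch_fstate_sketch {
    unsigned int patch_offset;
    int applied;                      /* enum livepatch_func_state in-tree */
    uint8_t opaque[LIVEPATCH_OPAQUE_SIZE];
};

_Static_assert(sizeof(((struct livepatch_fstate_sketch *)0)->opaque)
               >= LIVEPATCH_OPAQUE_SIZE,
               "internal opaque buffer smaller than public one");

[The second assertion is trivially true as written; its point is that
it would keep compiling, and keep the constraint visible, if the
internal constant ever diverged from the public one as Jan proposes.]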