Re: [Xen-devel] [PATCH v4 03/24] x86: refactor psr: implement main data structures.
>>> On 26.12.16 at 07:56, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
> On 16-12-22 09:13:43, Jan Beulich wrote:
>> >>> On 14.12.16 at 05:07, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
>> > So, we need define a socket info data structure, 'struct
>> > psr_socket_info' to manage information per socket. It contains a
>> > reference count array according to COS ID and a feature list to
>> > manage all features enabled. Every entry of the reference count
>> > array is used to record how many domains are using the COS registers
>> > according to the COS ID. For example, L3 CAT and L2 CAT are enabled,
>> > Dom1 uses COS_ID=1 registers of both features to save CBM values, like
>> > below.
>> >
>> >          +-------+-------+-------+-----+
>> >          | COS 0 | COS 1 | COS 2 | ... |
>> >          +-------+-------+-------+-----+
>> >   L3 CAT | 0x7ff | 0x1ff | ...   | ... |
>> >          +-------+-------+-------+-----+
>> >   L2 CAT | 0xff  | 0xff  | ...   | ... |
>> >          +-------+-------+-------+-----+
>> >
>> > If Dom2 has same CBM values, it can reuse these registers which
>> > COS_ID=1. That means, both Dom1 and Dom2 use same COS registers(ID=1)
>> > to save same L3/L2 values. So, the value ref[1] is 2 which means 2
>> > domains are using COS_ID 1.
>> >
>> > To manage a feature, we need define a feature node data structure,
>> > 'struct feat_node', to manage feature's specific HW info, its callback
>> > functions (all feature's specific behaviors are encapsulated into these
>> > callback functions), and an array of all COS registers values of this
>> > feature. CDP is a special feature which uses two entries of the array
>> > for one COS ID. So, the number of CDP COS registers is the half of L3
>> > CAT. E.g. L3 CAT has 16 COS registers, then CDP has 8 COS registers if
>> > it is enabled.
>>
>> The special nature of CDP will make some special handling necessary,
>> which may need reflection in data structure arrangement. Would you
>> mind spelling out here how CDP handling is intended to work?
>>
> Yes, CDP has its special handling processes. The main difference has been
> described above that CDP has half number of COS registers and uses two
> entries. Because of these, I split CDP out from L3 CAT and implement CDP
> its own feature callback functions from patch 13 to patch 16. You can
> check them for details.

Well, my point was to at least sketch out your (data structure related)
intentions in the comment here, to help reviewers (and future readers)
understand how the data structures fit that special case.

>> > +struct feat_node;
>> > +
>> > +/*
>> > + * This structure defines feature operation callback functions. Every feature
>> > + * enabled MUST implement such callback functions and register them to ops.
>> > + *
>> > + * Feature specific behaviors will be encapsulated into these callback
>> > + * functions. Then, the main flows will not be changed when introducing a new
>> > + * feature.
>> > + */
>> > +struct feat_ops {
>> > +    /*
>> > +     * init_feature is used in cpu initialization process to do feature
>> > +     * specific initialization works.
>> > +     */
>> > +    void (*init_feature)(unsigned int eax, unsigned int ebx,
>> > +                         unsigned int ecx, unsigned int edx,
>> > +                         struct feat_node *feat,
>> > +                         struct psr_socket_info *info);
>> > +};
>>
>> What is the reason to have a separate structure for this, when you
>> don't store a pointer in struct feat_node? If this was inlined there,
>> the odd forward declaration of struct feat_node wouldn't be needed
>> either. (The same question may apply to struct feat_hw_info.)
>>
> I just want to make codes be clear. If you prefer inline declaration,
> I think I should change it as below, right?
>
> struct feat_node {
>     ......
>     struct feat_ops {
>         ......
>     } ops;
>     struct feat_hw_info {
>         ......
>     } info;
>     ......
> };

Well, not exactly: The struct <tag> { ... } <name>; wrappers are
unnecessary then too. With them kept there indeed would be no big
difference between both variants.
Jan