Re: [Xen-devel] [PATCH v3] x86: psr: support co-exist features' values setting
On Tue, Oct 10, 2017 at 09:19:10AM +0000, Yi Sun wrote:
> It changes the members in 'cos_write_info' to transfer the feature array,
> feature properties array and value array. Then, we can write all features'
> values on the cos id into MSRs.
>
> Because multiple features may co-exist, we need to handle all features and
> write their values into a COS register with the new COS ID. E.g.:
> 1. L3 CAT and L2 CAT co-exist.
> 2. Dom1 and Dom2 share the same COS ID (2). The L3 CAT CBM of Dom1 is 0x1ff,
> the L2 CAT CBM of Dom1 is 0x1f.
> 3. The user wants to change the L2 CBM of Dom1 to 0xf. Because COS ID 2 is
> used by Dom2 too, we have to pick a new COS ID 3. The values of Dom1 on
> COS ID 3 are all default values, as below:
> ---------
> | COS 3 |
> ---------
> L3 CAT | 0x7ff |
> ---------
> L2 CAT | 0xff |
> ---------
> 4. After the setting, the L3 CAT CBM value of Dom1 should be kept and the new
> L2 CAT CBM should be set. So, the values on COS ID 3 should be as below:
> ---------
> | COS 3 |
> ---------
> L3 CAT | 0x1ff |
> ---------
> L2 CAT | 0xf |
> ---------
>
> Signed-off-by: Yi Sun <yi.y.sun@xxxxxxxxxxxxxxx>
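[Editor's note: the quoted scenario above is what the reworked write path has to
cover: once a new COS ID is picked, every co-existing feature's value (the kept
L3 CBM and the changed L2 CBM) must be written for that COS ID, not just the one
the user changed. Below is a minimal, self-contained C sketch of that idea only;
NR_FEATS, wrmsr_cos() and write_all_features() are illustrative names, not the
actual Xen functions or MSR layout.]

    /*
     * Hypothetical sketch (not the actual Xen code): when a domain moves to a
     * new COS ID, the values of *all* co-existing features must be written.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define NR_FEATS 2          /* e.g. L3 CAT and L2 CAT co-exist */

    /* Illustrative stand-in for the per-feature COS MSR write. */
    static void wrmsr_cos(unsigned int feat, unsigned int cos, uint32_t val)
    {
        printf("feature %u: COS %u <- 0x%x\n", feat, cos, val);
    }

    /* Write the values of all features for the newly picked COS ID. */
    static void write_all_features(unsigned int cos,
                                   const uint32_t vals[NR_FEATS])
    {
        for ( unsigned int feat = 0; feat < NR_FEATS; feat++ )
            wrmsr_cos(feat, cos, vals[feat]);
    }

    int main(void)
    {
        /* Dom1 moved to COS 3: keep L3 CBM 0x1ff, set the new L2 CBM 0xf. */
        const uint32_t dom1_vals[NR_FEATS] = { 0x1ff, 0xf };

        write_all_features(3, dom1_vals);
        return 0;
    }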
LGTM, just one nit.
Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> @@ -1137,30 +1159,19 @@ static int write_psr_msrs(unsigned int socket,
> unsigned int cos,
> const uint32_t val[], unsigned int array_len,
> enum psr_feat_type feat_type)
> {
> - int ret;
> struct psr_socket_info *info = get_socket_info(socket);
> struct cos_write_info data =
> {
> .cos = cos,
> - .feature = info->features[feat_type],
> - .props = feat_props[feat_type],
> + .features = info->features,
> + .val = val,
> + .array_len = array_len,
> + .result = 0,
This last line is not needed (result will be set to 0 already).
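[Editor's note: this is just the C rule that members omitted from a designated
initializer are zero-initialized, so an explicit '.result = 0' adds nothing.
A small standalone example with a made-up struct name:]

    #include <assert.h>

    struct cos_write_demo {
        unsigned int cos;
        int result;          /* not named in the initializer below */
    };

    int main(void)
    {
        /* Members not mentioned in a designated initializer are set to 0. */
        struct cos_write_demo data = { .cos = 3 };

        assert(data.result == 0);
        return 0;
    }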
Thanks, Roger.