Re: [Xen-devel] [PATCH v10 05/25] x86: refactor psr: L3 CAT: implement CPU init and free flow.



>>> On 06.04.17 at 12:02, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
> On 17-04-06 03:34:27, Jan Beulich wrote:
>> >>> On 06.04.17 at 11:22, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
>> > On 17-04-06 02:32:04, Jan Beulich wrote:
>> >> >>> On 06.04.17 at 07:49, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
>> >> > On 17-04-05 09:10:58, Jan Beulich wrote:
>> >> >> >>> On 01.04.17 at 15:53, <yi.y.sun@xxxxxxxxxxxxxxx> wrote:
>> >> >> > +static void cat_init_feature(const struct cpuid_leaf *regs,
>> >> >> > +                             struct feat_node *feat,
>> >> >> > +                             struct psr_socket_info *info,
>> >> >> > +                             enum psr_feat_type type)
>> >> >> > +{
>> >> >> > +    unsigned int socket, i;
>> >> >> > +
>> >> >> > +    /* No valid value so do not enable feature. */
>> >> >> > +    if ( !regs->a || !regs->d )
>> >> >> > +        return;
>> >> >> > +
>> >> >> > +    feat->props->cbm_len = (regs->a & CAT_CBM_LEN_MASK) + 1;
>> >> >> > +    feat->props->cos_max = min(opt_cos_max, regs->d & CAT_COS_MAX_MASK);
>> >> >> > +
>> >> >> > +    switch ( type )
>> >> >> > +    {
>> >> >> > +    case PSR_SOCKET_L3_CAT:
>> >> >> > +        /* cos=0 is reserved as default cbm (all bits within cbm_len are 1). */
>> >> >> > +        feat->cos_reg_val[0] = cat_default_val(feat->props->cbm_len);
>> >> >> > +
>> >> >> > +        /*
>> >> >> > +         * To handle cpu offline and then online case, we need restore MSRs to
>> >> >> > +         * default values.
>> >> >> > +         */
>> >> >> > +        for ( i = 1; i <= feat->props->cos_max; i++ )
>> >> >> > +        {
>> >> >> > +            wrmsrl(MSR_IA32_PSR_L3_MASK(i), feat->cos_reg_val[0]);
>> >> >> > +            feat->cos_reg_val[i] = feat->cos_reg_val[0];
>> >> >> > +        }
>> >> >> 
>> >> >> I continue to have difficulty with this: Why is offline-then-online
>> >> >> any different from first-time-online? Why wouldn't setting the
>> >> > 
>> >> > I may remove this comment. Per the current code, the MSRs are written to
>> >> > their default values whether or not it is the first time.
>> >> > 
>> >> >> registers to their intended values be taken care of by the context
>> >> >> switch code, once vCPU-s get scheduled onto the newly
>> >> >> onlined CPU?
>> >> >> 
>> >> > cat_init_feature is only called when the first CPU on a socket comes
>> >> > online. The MSRs to set are per socket, so we only need to set them once
>> >> > when the socket comes online.
>> >> 
>> >> This does not answer my question. Once again - why does this need
>> >> doing here explicitly, rather than relying on the needed values being
>> >> loaded when the first vCPU gets scheduled onto one of the pCPU-s
>> >> of this socket?
>> >> 
>> > I am not sure I understand your question correctly, so let me try to explain
>> > again. As we discussed in v9, the MSRs may hold stale values when a socket
>> > comes online. That is why we have to restore them.
>> > 
>> > The MSRs are per socket, i.e. there is only one group of these MSRs per
>> > socket, so writing them on one CPU makes them valid for the whole socket.
>> > 'cat_init_feature' is executed when the first CPU on a socket comes online,
>> > which is why we restore them there.
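>> > 
>> > As a minimal sketch of that gating (the helper name, the socket_info
>> > array, the feat_init flag and the features[] field are illustrative
>> > assumptions, not code quoted from the patch), the CPU-online hook would
>> > only call cat_init_feature() for the first CPU of a socket:
>> > 
>> >     static void psr_cpu_init_sketch(void)
>> >     {
>> >         unsigned int socket = cpu_to_socket(smp_processor_id());
>> >         struct psr_socket_info *info = socket_info + socket;
>> >         struct cpuid_leaf regs;
>> > 
>> >         /* Another CPU on this socket already initialised the feature. */
>> >         if ( info->feat_init )
>> >             return;
>> > 
>> >         /* CPUID leaf 0x10, subleaf 1 describes L3 CAT (cbm_len, cos_max). */
>> >         cpuid_count_leaf(0x10, 1, &regs);
>> >         cat_init_feature(&regs, info->features[PSR_SOCKET_L3_CAT], info,
>> >                          PSR_SOCKET_L3_CAT);
>> > 
>> >         info->feat_init = true;
>> >     }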
>> 
>> All understood. But you write the MSRs with the needed values in
>> the context switch path, don't you? Why is that writing not
>> sufficient?
>> 
> No, in the context switch path we only write the ASSOC register, setting the
> COS ID into it so that the corresponding COS MSR value (the CBM) takes effect.
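> 
> A rough sketch of that context-switch write, for illustration only (the
> function name is hypothetical; the MSR layout, RMID in the low bits and
> COS ID in bits 63:32, is per the SDM):
> 
>     static void psr_assoc_write_cos_sketch(unsigned int cos)
>     {
>         uint64_t reg;
> 
>         /* Only the per-CPU ASSOC register is rewritten with the target
>          * COS ID; the CBMs stay in the per-socket COS MSRs written at
>          * socket init. */
>         rdmsrl(MSR_IA32_PSR_ASSOC, reg);
>         reg = (reg & 0xffffffffULL) | ((uint64_t)cos << 32);
>         wrmsrl(MSR_IA32_PSR_ASSOC, reg);
>     }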

Okay, so not the context switch path then. But you must be
changing the MSRs _somewhere_, and the question is why that
somewhere isn't sufficient.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

