
Re: [Xen-devel] [PATCH v4 04/10] x86/hvm: Collect information of TSC scaling ratio



On 01/18/16 11:45, Egger, Christoph wrote:
> On 17/01/16 22:58, Haozhong Zhang wrote:
> > Both VMX TSC scaling and SVM TSC ratio use the 64-bit TSC scaling ratio,
> > but the number of fractional bits of the ratio is different between VMX
> > and SVM. This patch adds the architecture code to collect the number of
> > fractional bits and other related information into fields of struct
> > hvm_function_table so that they can be used in the common code.
> > 
> > Signed-off-by: Haozhong Zhang <haozhong.zhang@xxxxxxxxx>
> > Reviewed-by: Kevin Tian <kevin.tian@xxxxxxxxx>
> > Reviewed-by: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
> > ---
> > Changes in v4:
> >  (addressing Jan Beulich's comments in v3 patch 12)
> >  * Set TSC scaling parameters in hvm_funcs conditionally.
> >  * Remove TSC scaling parameter tsc_scaling_supported in hvm_funcs which
> >    can be derived from other parameters.
> >  (code cleanup)
> >  * Merge with v3 patch 11 "x86/hvm: Detect TSC scaling through hvm_funcs"
> >    whose work can be done early in this patch.
> > 
> >  xen/arch/x86/hvm/hvm.c        |  4 ++--
> >  xen/arch/x86/hvm/svm/svm.c    | 10 ++++++++--
> >  xen/arch/x86/time.c           |  9 ++++-----
> >  xen/include/asm-x86/hvm/hvm.h | 14 ++++++++++++++
> >  4 files changed, 28 insertions(+), 9 deletions(-)
> > 
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index 3648a44..6d30d8b 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -314,7 +314,7 @@ void hvm_set_guest_tsc_fixed(struct vcpu *v, u64 guest_tsc, u64 at_tsc)
> >      else
> >      {
> >          tsc = at_tsc ?: rdtsc();
> > -        if ( cpu_has_tsc_ratio )
> > +        if ( hvm_tsc_scaling_supported )
> >              tsc = hvm_funcs.scale_tsc(v, tsc);
> >      }
> >  
> > @@ -346,7 +346,7 @@ u64 hvm_get_guest_tsc_fixed(struct vcpu *v, uint64_t at_tsc)
> >      else
> >      {
> >          tsc = at_tsc ?: rdtsc();
> > -        if ( cpu_has_tsc_ratio )
> > +        if ( hvm_tsc_scaling_supported )
> >              tsc = hvm_funcs.scale_tsc(v, tsc);
> >      }
> >  
> > diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> > index 953e0b5..8b316a0 100644
> > --- a/xen/arch/x86/hvm/svm/svm.c
> > +++ b/xen/arch/x86/hvm/svm/svm.c
> > @@ -1450,6 +1450,14 @@ const struct hvm_function_table * __init start_svm(void)
> >      if ( !cpu_has_svm_nrips )
> >          clear_bit(SVM_FEATURE_DECODEASSISTS, &svm_feature_flags);
> >  
> > +    if ( cpu_has_tsc_ratio )
> > +    {
> > +        svm_function_table.default_tsc_scaling_ratio = DEFAULT_TSC_RATIO;
> > +        svm_function_table.max_tsc_scaling_ratio = ~TSC_RATIO_RSVD_BITS;
> > +        svm_function_table.tsc_scaling_ratio_frac_bits = 32;
> > +        svm_function_table.scale_tsc = svm_scale_tsc;
> > +    }
> > +
> >  #define P(p,s) if ( p ) { printk(" - %s\n", s); printed = 1; }
> >      P(cpu_has_svm_npt, "Nested Page Tables (NPT)");
> >      P(cpu_has_svm_lbrv, "Last Branch Record (LBR) Virtualisation");
> > @@ -2269,8 +2277,6 @@ static struct hvm_function_table __initdata svm_function_table = {
> >      .nhvm_vmcx_hap_enabled = nsvm_vmcb_hap_enabled,
> >      .nhvm_intr_blocked = nsvm_intr_blocked,
> >      .nhvm_hap_walk_L1_p2m = nsvm_hap_walk_L1_p2m,
> > -
> > -    .scale_tsc            = svm_scale_tsc,
> >  };
> >  
> >  void svm_vmexit_handler(struct cpu_user_regs *regs)
> > diff --git a/xen/arch/x86/time.c b/xen/arch/x86/time.c
> > index 988403a..a243bc3 100644
> > --- a/xen/arch/x86/time.c
> > +++ b/xen/arch/x86/time.c
> > @@ -37,7 +37,6 @@
> >  #include <asm/hpet.h>
> >  #include <io_ports.h>
> >  #include <asm/setup.h> /* for early_time_init */
> > -#include <asm/hvm/svm/svm.h> /* for cpu_has_tsc_ratio */
> >  #include <public/arch-x86/cpuid.h>
> >  
> >  /* opt_clocksource: Force clocksource to one of: pit, hpet, acpi. */
> > @@ -815,7 +814,7 @@ static void __update_vcpu_system_time(struct vcpu *v, int force)
> >      }
> >      else
> >      {
> > -        if ( has_hvm_container_domain(d) && cpu_has_tsc_ratio )
> > +        if ( has_hvm_container_domain(d) && hvm_tsc_scaling_supported )
> >          {
> >          tsc_stamp            = hvm_funcs.scale_tsc(v, t->local_tsc_stamp);
> >              _u.tsc_to_system_mul = d->arch.vtsc_to_ns.mul_frac;
> > @@ -1758,7 +1757,7 @@ void tsc_get_info(struct domain *d, uint32_t *tsc_mode,
> >                    uint32_t *incarnation)
> >  {
> >      bool_t enable_tsc_scaling = has_hvm_container_domain(d) &&
> > -                                cpu_has_tsc_ratio && !d->arch.vtsc;
> > +                                hvm_tsc_scaling_supported && !d->arch.vtsc;
> >  
> >      *incarnation = d->arch.incarnation;
> >      *tsc_mode = d->arch.tsc_mode;
> > @@ -1865,7 +1864,7 @@ void tsc_set_info(struct domain *d,
> >           */
> >          if ( tsc_mode == TSC_MODE_DEFAULT && host_tsc_is_safe() &&
> >               (has_hvm_container_domain(d) ?
> > -              d->arch.tsc_khz == cpu_khz || cpu_has_tsc_ratio :
> > +              d->arch.tsc_khz == cpu_khz || hvm_tsc_scaling_supported :
> >                incarnation == 0) )
> 
> cpu_khz varies not only across different machines with the exact same
> CPU and the same nominal CPU frequency; it even differs across a
> reboot of the same machine. This breaks migration when you migrate
> back and forth. It is a long-standing issue, though, not a blocker for
> this patch.
>
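
For reference: the point of collecting these parameters is that common
code can scale a TSC without knowing whether it is running on VMX (48
fractional bits) or SVM (32 fractional bits). Below is a rough sketch of
how the pieces fit together. It is an illustration only, not the literal
patch code; in particular, the hvm_tsc_scaling_supported derivation is
my assumption based on the v4 changelog note that it "can be derived
from other parameters".

    #include <stdint.h>

    struct vcpu;                              /* opaque in this sketch */

    /* Fields this patch adds to struct hvm_function_table (field names
     * taken from the hunk above; all other members elided): */
    struct hvm_function_table {
        uint64_t default_tsc_scaling_ratio;   /* ratio meaning "no scaling" */
        uint64_t max_tsc_scaling_ratio;       /* largest valid ratio */
        uint8_t  tsc_scaling_ratio_frac_bits; /* 32 on SVM, 48 on VMX */
        uint64_t (*scale_tsc)(struct vcpu *v, uint64_t tsc);
    };

    extern struct hvm_function_table hvm_funcs;

    /* The fields are only filled in when the CPU supports TSC scaling
     * (see the cpu_has_tsc_ratio check in start_svm() above), so a
     * non-zero default ratio can stand in for a "supported" flag.
     * This exact derivation is an assumption on my part: */
    #define hvm_tsc_scaling_supported (!!hvm_funcs.default_tsc_scaling_ratio)

    /* The ratio is a 64-bit fixed-point value with frac_bits fractional
     * bits, so scaling computes guest_tsc = host_tsc * ratio / 2^frac_bits,
     * using a 128-bit intermediate to avoid overflow: */
    static inline uint64_t fixed_point_scale_tsc(uint64_t host_tsc,
                                                 uint64_t ratio,
                                                 unsigned int frac_bits)
    {
        return (uint64_t)(((unsigned __int128)host_tsc * ratio)
                          >> frac_bits);
    }

With SVM's DEFAULT_TSC_RATIO (1ULL << 32) and frac_bits = 32, this
returns host_tsc unchanged, which is why that value serves as the
default.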

If cpu_khz changes after the host reboots and a VM is later migrated
back to that host, it behaves just like a normal migration. That is:
(1) if the host supports TSC scaling, TSC scaling is enabled so the VM
    keeps using its original cpu_khz;
(2) otherwise, TSC emulation takes effect and the VM still observes a
    TSC at its original cpu_khz.
A rough sketch of case (1) follows below.
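
The scaling ratio can be computed from the guest's recorded frequency
and the current host frequency. This is an illustration of the idea
only, not code from this series; the function name is hypothetical:

    #include <stdint.h>

    /* ratio = (guest_khz << frac_bits) / host_khz, i.e. a 64-bit
     * fixed-point value with frac_bits fractional bits.  The 128-bit
     * intermediate keeps the left shift from overflowing. */
    static uint64_t make_tsc_scaling_ratio(uint32_t guest_khz,
                                           uint32_t host_khz,
                                           unsigned int frac_bits)
    {
        return (uint64_t)((((unsigned __int128)guest_khz) << frac_bits)
                          / host_khz);
    }

For example, a guest that recorded 2000000 kHz and is migrated to a
2500000 kHz host gets ratio = 0.8 * 2^32 with SVM's 32 fractional bits,
and (host_tsc * ratio) >> 32 then advances at the guest's original
2 GHz.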

Haozhong

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

