
Re: [Xen-devel] [PATCH RFC 3/3] xtf: add minimal HPET functionality test



On 23/02/18 13:27, Roger Pau Monne wrote:
> Add a basic HPET functionality test, note that this test requires the
> HPET to support level triggered interrupts.
>
> Further improvements should add support for interrupt delivery, and
> testing all the available timers.
>
> Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> ---
> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> ---
>  arch/x86/include/arch/lib.h |  14 ++++
>  docs/all-tests.dox          |   2 +
>  tests/hpet/Makefile         |   9 +++
>  tests/hpet/main.c           | 187 ++++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 212 insertions(+)
>  create mode 100644 tests/hpet/Makefile
>  create mode 100644 tests/hpet/main.c
>
> diff --git a/arch/x86/include/arch/lib.h b/arch/x86/include/arch/lib.h
> index 6714bdc..3400890 100644
> --- a/arch/x86/include/arch/lib.h
> +++ b/arch/x86/include/arch/lib.h
> @@ -392,6 +392,20 @@ static inline void write_xcr0(uint64_t xcr0)
>      xsetbv(0, xcr0);
>  }
>  
> +static inline uint64_t rdtsc(void)
> +{
> +    uint32_t low, high;
> +
> +    asm volatile ("rdtsc" : "=a" (low), "=d" (high));

For my own timing purposes, I've been using rdtscp because it is
strictly more helpful, but this isn't a general solution.
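
i.e. something like this (untested sketch; rdtscp additionally returns
TSC_AUX in %ecx, which is simply discarded here):

    static inline uint64_t rdtscp(void)
    {
        uint32_t low, high, aux;

        asm volatile ("rdtscp" : "=a" (low), "=d" (high), "=c" (aux));

        return ((uint64_t)high << 32) | low;
    }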

For rdtsc (contrary to the way the other thread is progressing), what
matters is a dispatch serialising event, which is different to an
architecturally serialising event.

The easiest fix for now is to unconditionally use mfence, leaving a
comment saying that this should relax to lfence on Intel, and on AMD
once the pipeline is configured correctly.  Please name the function
rdtsc_ordered() though, to distinguish it from a plain rdtsc instruction.
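
i.e. roughly (entirely untested sketch):

    static inline uint64_t rdtsc_ordered(void)
    {
        uint32_t low, high;

        /* TODO: relax mfence to lfence on Intel, and on AMD once the
         * pipeline is configured to make lfence dispatch serialising. */
        asm volatile ("mfence; rdtsc" : "=a" (low), "=d" (high) :: "memory");

        return ((uint64_t)high << 32) | low;
    }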

> +
> +    return ((uint64_t)high << 32) | low;
> +}
> +
> +static inline void pause(void)
> +{
> +    asm volatile ("pause");
> +}
> +
>  #endif /* XTF_X86_LIB_H */
>  
>  /*
> diff --git a/docs/all-tests.dox b/docs/all-tests.dox
> index 355cb80..122840c 100644
> --- a/docs/all-tests.dox
> +++ b/docs/all-tests.dox
> @@ -127,4 +127,6 @@ guest breakout.
>  @subpage test-nested-svm - Nested SVM tests.
>  
>  @subpage test-nested-vmx - Nested VT-x tests.
> +
> +@subpage test-hpet - HPET functional test.

This page is sorted by test category, but this hunk adds the entry to
the "in-development" section, while the Makefile below declares the
test as utility.

FWIW, I think "in-development" is probably a better category than
utility, because we will eventually want to get this test into automation.
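
i.e. in tests/hpet/Makefile:

    CATEGORY  := in-development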

>  */
> diff --git a/tests/hpet/Makefile b/tests/hpet/Makefile
> new file mode 100644
> index 0000000..934e63c
> --- /dev/null
> +++ b/tests/hpet/Makefile
> @@ -0,0 +1,9 @@
> +include $(ROOT)/build/common.mk
> +
> +NAME      := hpet
> +CATEGORY  := utility
> +TEST-ENVS := hvm32
> +
> +obj-perenv += main.o
> +
> +include $(ROOT)/build/gen.mk
> diff --git a/tests/hpet/main.c b/tests/hpet/main.c
> new file mode 100644
> index 0000000..57be410
> --- /dev/null
> +++ b/tests/hpet/main.c
> @@ -0,0 +1,187 @@
> +/**
> + * @file tests/hpet/main.c
> + * @ref test-hpet
> + *
> + * @page test-hpet hpet
> + *
> + * HPET functionality testing.
> + *
> + * Quite limited, currently only Timer N is tested. No interrupt delivery
> + * tests.
> + *
> + * @see tests/hpet/main.c
> + */
> +#include <xtf.h>
> +
> +#define HPET_BASE_ADDRESS       0xfed00000
> +
> +#define HPET_ID                 0
> +#define HPET_ID_NUMBER          0x1f00
> +#define HPET_ID_NUMBER_SHIFT    8
> +
> +#define HPET_PERIOD             0x004
> +#define HPET_MAX_PERIOD         0x05f5e100
> +
> +#define HPET_CFG                0x010
> +#define HPET_CFG_ENABLE         0x001
> +
> +#define HPET_STATUS             0x020
> +
> +#define HPET_COUNTER            0x0f0
> +
> +#define HPET_Tn_CFG(n)          (0x100 + (n) * 0x20)
> +#define HPET_TN_LEVEL           0x002
> +#define HPET_TN_ENABLE          0x004
> +#define HPET_TN_PERIODIC        0x008
> +#define HPET_TN_32BIT           0x100
> +#define HPET_TN_ROUTE_SHIFT     9
> +
> +#define HPET_Tn_CMP(n)          (0x108 + (n) * 0x20)
> +
> +/*
> + * NB: should probably be an explicit movl, but clang seems to generate good
> + * code.
> + */
> +#define HPET_REG(reg) (*(volatile uint32_t *)(_p(HPET_BASE_ADDRESS) + (reg)))

A lot of the above should be in a dedicated hpet driver, rather than in
the test.  See the selftest test_driver_init(), and apic.{h,c}, which is
the closest similar example.

That said, HPET registers are in general 64 bits wide rather than 32. 
It is probably best to split the basic hpet infrastructure into a
separate patch from the test.
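
i.e. driver-side accessors roughly of this shape (untested sketch;
names illustrative, keeping the fixed base address for now):

    static inline uint64_t hpet_read64(unsigned int reg)
    {
        return *(volatile uint64_t *)(_p(HPET_BASE_ADDRESS) + reg);
    }

    static inline void hpet_write64(unsigned int reg, uint64_t val)
    {
        *(volatile uint64_t *)(_p(HPET_BASE_ADDRESS) + reg) = val;
    }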

> +
> +#define MS_TO_NS                1000000
> +/* p is in fs */
> +#define MS_TO_TICKS(ms, p)      (((ms) * MS_TO_NS) / ((p) / 1000000))
> +
> +const char test_title[] = "Test HPET";
> +
> +static uint32_t freq;
> +
> +static void set_freq(void)
> +{
> +    uint32_t eax, ebx, ecx, edx, base;
> +    bool found = false;
> +
> +    /* Get tsc frequency from cpuid. */
> +    for ( base = XEN_CPUID_FIRST_LEAF;
> +          base < XEN_CPUID_FIRST_LEAF + 0x10000; base += 0x100 )
> +    {
> +        cpuid(base, &eax, &ebx, &ecx, &edx);
> +
> +        if ( (ebx == XEN_CPUID_SIGNATURE_EBX) &&
> +             (ecx == XEN_CPUID_SIGNATURE_ECX) &&
> +             (edx == XEN_CPUID_SIGNATURE_EDX) &&
> +             ((eax - base) >= 2) )
> +        {
> +            found = true;
> +            break;
> +        }
> +    }
> +
> +    if ( !found )
> +        panic("Unable to locate Xen CPUID leaves\n");
> +
> +    cpuid_count(base + 3, 0, &eax, &ebx, &freq, &edx);
> +    printk("TSC frequency %ukHz\n", freq);

Calculate what you need in arch/x86/setup.c and export it via
arch/x86/include/arch/cpuid.h
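
e.g. (sketch only; name illustrative):

    /* arch/x86/include/arch/cpuid.h */
    extern uint32_t tsc_khz;    /* TSC frequency in kHz, 0 if unknown. */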

However, you can't rely on the frequency being constant.  It might be
better to busy wait on the percpu wallclock instead, if you don't want
to sort out interrupts.
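
Very rough sketch of the idea (entirely untested; polls system_time from
the per-vcpu time record as a monotonic reference, assuming shared_info
is already mapped, and ignoring the TSC extrapolation fields, which
limits resolution to Xen's update frequency):

    static uint64_t xen_system_time(void)
    {
        const volatile struct vcpu_time_info *t =
            &shared_info.vcpu_info[0].time;
        uint32_t ver;
        uint64_t res;

        /* Retry if Xen was mid-update. */
        do {
            ver = t->version;
            res = t->system_time;
        } while ( (ver & 1) || ver != t->version );

        return res;
    }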

~Andrew
