
Re: [Xen-devel] [PATCH 2/2] xen/x86: Introduce a new VMASSIST for architectural behaviour of iopl



>>> On 16.03.16 at 21:05, <andrew.cooper3@xxxxxxxxxx> wrote:
> @@ -1742,8 +1742,10 @@ static void load_segments(struct vcpu *n)
>              cs_and_mask = (unsigned short)regs->cs |
>                  ((unsigned int)vcpu_info(n, evtchn_upcall_mask) << 16);
>              /* Fold upcall mask into RFLAGS.IF. */
> -            eflags  = regs->_eflags & ~X86_EFLAGS_IF;
> +            eflags  = regs->_eflags & ~(X86_EFLAGS_IF|X86_EFLAGS_IOPL);

This and ...

> @@ -1788,8 +1790,10 @@ static void load_segments(struct vcpu *n)
>              ((unsigned long)vcpu_info(n, evtchn_upcall_mask) << 32);
>  
>          /* Fold upcall mask into RFLAGS.IF. */
> -        rflags  = regs->rflags & ~X86_EFLAGS_IF;
> +        rflags  = regs->rflags & ~(X86_EFLAGS_IF|X86_EFLAGS_IOPL);

... this is not really necessary (but also not wrong) - the actual
EFLAGS.IOPL is always zero (and is assumed to be so by the code
further down from the respective adjustments you make). For
consistency's sake it might be better either to drop the changes
here, or to also adjust the two places masking regs->eflags.

> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -1806,7 +1806,9 @@ static int guest_io_okay(
>  #define TOGGLE_MODE() if ( user_mode ) toggle_guest_mode(v)
>  
>      if ( !vm86_mode(regs) &&
> -         (v->arch.pv_vcpu.iopl >= (guest_kernel_mode(v, regs) ? 1 : 3)) )
> +         (MASK_EXTR(v->arch.pv_vcpu.iopl, X86_EFLAGS_IOPL) >=
> +          (guest_kernel_mode(v, regs) ?
> +           (VM_ASSIST(v->domain, architectural_iopl) ? 0 : 1) : 3)) )
>          return 1;
>  
>      if ( v->arch.pv_vcpu.iobmp_limit > (port + bytes) )
> @@ -2367,7 +2369,9 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
>  
>      case 0xfa: /* CLI */
>      case 0xfb: /* STI */
> -        if ( v->arch.pv_vcpu.iopl < (guest_kernel_mode(v, regs) ? 1 : 3) )
> +        if ( MASK_EXTR(v->arch.pv_vcpu.iopl, X86_EFLAGS_IOPL) <
> +             (guest_kernel_mode(v, regs) ?
> +              (VM_ASSIST(v->domain, architectural_iopl) ? 0 : 1) : 3) )
>              goto fail;

The similarity of the two checks, together with their growing
complexity, suggests making this a macro or an inline function.
Additionally, the resulting binary code would likely be better if
you compared v->arch.pv_vcpu.iopl with MASK_INSR(<literal value>,
X86_EFLAGS_IOPL), even if that means having three MASK_INSR()
invocations (albeit those could again be hidden behind a macro,
e.g.

#define IOPL(n) MASK_INSR(n, X86_EFLAGS_IOPL)

).

> --- a/xen/arch/x86/x86_64/compat/entry.S
> +++ b/xen/arch/x86/x86_64/compat/entry.S
> @@ -277,9 +277,14 @@ compat_create_bounce_frame:
>          testb %al,%al                   # Bits 0-7: saved_upcall_mask
>          setz  %ch                       # %ch == !saved_upcall_mask
>          movl  UREGS_eflags+8(%rsp),%eax
> -        andl  $~X86_EFLAGS_IF,%eax
> +        andl  $~(X86_EFLAGS_IF|X86_EFLAGS_IOPL),%eax

See earlier comment.

>          addb  %ch,%ch                   # Bit 9 (EFLAGS.IF)
>          orb   %ch,%ah                   # Fold EFLAGS.IF into %eax
> +        movq  VCPU_domain(%rbx),%rcx    # if ( VM_ASSIST(v->domain, architectural_iopl) )

If you used another register, this could be pulled up quite a bit,
to hide the latency of the load before the loaded value gets used.

> +        testb $1 << VMASST_TYPE_architectural_iopl,DOMAIN_vm_assist(%rcx)
> +        jz    .Lft6
> +        movzwl VCPU_iopl(%rbx),%ecx     # Bits 13:12 (EFLAGS.IOPL)

Why not just MOVL?

> +        orl   %ecx,%eax                 # Fold EFLAGS.IOPL into %eax

Also I continue to think this would better be done with CMOVcc,
avoiding yet another conditional branch here.
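The CMOVcc variant might look roughly like this (a sketch only, addressing both remarks: the domain pointer load is pulled up, and the branch is replaced by a conditional move; the assumption that %edx is free at this point in compat_create_bounce_frame is mine, not verified against the surrounding code):

```asm
        movq  VCPU_domain(%rbx),%rcx    # pulled up to hide load latency
        ...
        movl  UREGS_eflags+8(%rsp),%eax
        andl  $~(X86_EFLAGS_IF|X86_EFLAGS_IOPL),%eax
        addb  %ch,%ch                   # Bit 9 (EFLAGS.IF)
        orb   %ch,%ah                   # Fold EFLAGS.IF into %eax
        xorl  %edx,%edx                 # 0 if the assist is disabled
        testb $1 << VMASST_TYPE_architectural_iopl,DOMAIN_vm_assist(%rcx)
        cmovnzl VCPU_iopl(%rbx),%edx    # guest's EFLAGS.IOPL if enabled
        orl   %edx,%eax                 # Fold (possibly zero) IOPL into %eax
```

This keeps the fast path branch-free; CMOVcc reads its memory operand unconditionally, which is fine here since VCPU_iopl is always mapped.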

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

