
[Xen-devel] RE: [Xen-staging] [xen-unstable] svm: Improve emulation of SMSW instruction for memory operands.


  • To: xen-devel@xxxxxxxxxxxxxxxxxxx, xen-staging@xxxxxxxxxxxxxxxxxxx
  • From: "Petersson, Mats" <Mats.Petersson@xxxxxxx>
  • Date: Fri, 30 Mar 2007 18:21:57 +0200
  • Delivery-date: Fri, 30 Mar 2007 17:24:42 +0100
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: Acdy5ovwI7VMxSrsTlOEtlVS4Bsa2gAAEcDQ
  • Thread-topic: [Xen-staging] [xen-unstable] svm: Improve emulation of SMSW instruction for memory operands.

 

> -----Original Message-----
> From: xen-staging-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-staging-bounces@xxxxxxxxxxxxxxxxxxx] 
> On Behalf Of Xen staging patchbot-unstable
> Sent: 30 March 2007 17:14
> To: xen-staging@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-staging] [xen-unstable] svm: Improve emulation of SMSW instruction for memory operands.
> 
> # HG changeset patch
> # User Keir Fraser <keir@xxxxxxxxxxxxx>
> # Date 1175271230 -3600
> # Node ID d3b1341d83db9445ef407cd50d7010b91f320043
> # Parent  3c0d15279dc7a5fae7b01c9d27abe0a31ddc0545
> svm: Improve emulation of SMSW instruction for memory operands.
> From: Trolle Selander <trolle.selander@xxxxxxxxx>
> Signed-off-by: Keir Fraser <keir@xxxxxxxxxxxxx>
> ---
>  xen/arch/x86/hvm/svm/svm.c |   64 ++++++++++++++++++++++++++++++++++++++++-----
>  1 files changed, 58 insertions(+), 6 deletions(-)
> 
> diff -r 3c0d15279dc7 -r d3b1341d83db xen/arch/x86/hvm/svm/svm.c
> --- a/xen/arch/x86/hvm/svm/svm.c      Fri Mar 30 17:02:46 2007 +0100
> +++ b/xen/arch/x86/hvm/svm/svm.c      Fri Mar 30 17:13:50 2007 +0100
> @@ -1866,11 +1866,13 @@ static int svm_cr_access(struct vcpu *v,
>  {
>      struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
>      int inst_len = 0;
> -    int index;
> -    unsigned int gpreg;
> -    unsigned long value;
> +    int index,addr_size,i;
> +    unsigned int gpreg,offset;
> +    unsigned long value,addr;
>      u8 buffer[MAX_INST_LEN];   
>      u8 prefix = 0;
> +    u8 modrm;
> +    enum x86_segment seg;
>      int result = 1;
>      enum instruction_index list_a[] = {INSTR_MOV2CR, INSTR_CLTS, INSTR_LMSW};
>      enum instruction_index list_b[] = {INSTR_MOVCR2, INSTR_SMSW};
> @@ -1931,9 +1933,59 @@ static int svm_cr_access(struct vcpu *v,
>          break;
>  
>      case INSTR_SMSW:
> -        value = v->arch.hvm_svm.cpu_shadow_cr0;
> -        gpreg = decode_src_reg(prefix, buffer[index+2]);
> -        set_reg(gpreg, value, regs, vmcb);
> +        value = v->arch.hvm_svm.cpu_shadow_cr0 & 0xFFFF;
> +        modrm = buffer[index+2];
> +        addr_size = svm_guest_x86_mode( v );
> +        if ( likely((modrm & 0xC0) >> 6 == 3) )
> +        {
> +            gpreg = decode_src_reg(prefix, modrm);
> +            set_reg(gpreg, value, regs, vmcb);
> +        }
> +        /*
> +         * For now, only implement decode of the offset mode, since that's the
> +         * only mode observed in a real-world OS. This code is also making the
> +         * assumption that we'll never hit this code in long mode.
> +         */

Shouldn't such assumptions be checked with a "BUG_ON", "ASSERT" or
similar? It doesn't seem like a difficult thing to check to me. And it's
a heck of a lot easier to figure out why it went completely horribly
wrong than when the code just happily continues to execute, but in the
wrong place or in some incorrect way.

[I agree that there's maybe a 0.1% chance that someone uses SMSW in
64-bit mode, but it's still a VALID instruction in that mode - and
perhaps more importantly, it may be used with different addressing modes
in 32-bit code.]

--
Mats
> +        else if ( (modrm == 0x26) || (modrm == 0x25) )
> +        {   
> +            seg = x86_seg_ds;
> +            i = index;
> +            /* Segment or address size overrides? */
> +            while ( i-- )
> +            {
> +                switch ( buffer[i] )
> +                {
> +                   case 0x26: seg = x86_seg_es; break;
> +                   case 0x2e: seg = x86_seg_cs; break;
> +                   case 0x36: seg = x86_seg_ss; break;
> +                   case 0x64: seg = x86_seg_fs; break;
> +                   case 0x65: seg = x86_seg_gs; break;
> +                   case 0x67: addr_size ^= 6;   break;
> +                }
> +            }
> +            /* Bail unless this really is a seg_base + offset case */
> +            if ( ((modrm == 0x26) && (addr_size == 4)) ||
> +                 ((modrm == 0x25) && (addr_size == 2)) )
> +            {
> +                gdprintk(XENLOG_ERR, "SMSW emulation at guest address: "
> +                         "%lx failed due to unhandled addressing mode."
> +                         "ModRM byte was: %x \n", svm_rip2pointer(v), modrm);
> +                domain_crash(v->domain);
> +            }
> +            inst_len += addr_size;
> +            offset = *(( unsigned int *) ( void *) &buffer[index + 3]);
> +            offset = ( addr_size == 4 ) ? offset : ( offset & 0xFFFF );
> +            addr = hvm_get_segment_base(v, seg);
> +            addr += offset;
> +            hvm_copy_to_guest_virt(addr,&value,2);
> +        }
> +        else
> +        {
> +           gdprintk(XENLOG_ERR, "SMSW emulation at guest address: %lx "
> +                    "failed due to unhandled addressing mode!"
> +                    "ModRM byte was: %x \n", svm_rip2pointer(v), modrm);
> +           domain_crash(v->domain);
> +        }
>          break;
>  
>      default:
> 
> _______________________________________________
> Xen-staging mailing list
> Xen-staging@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-staging
> 
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel



 

