
[xen staging-4.18] x86/pv: Handle #PF correctly when reading the IO permission bitmap



commit 008808ac9523efcbdc514d8ae35b4db07bca16ec
Author:     Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
AuthorDate: Tue Oct 29 16:38:29 2024 +0100
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Tue Oct 29 16:38:29 2024 +0100

    x86/pv: Handle #PF correctly when reading the IO permission bitmap
    
    The switch statement in guest_io_okay() is a very expensive way of
    pre-initialising x with ~0, and performing a partial read into it.
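
    (Illustrative sketch only, not part of the patch: the switch collapses to
    a pre-initialised union plus a plain partial copy.)

        union { uint8_t bytes[2]; uint16_t mask; } x = { .mask = ~0 };

        /* Copied bytes overwrite ~0; an uncopied tail keeps "no access". */
        __copy_from_guest_offset(x.bytes, v->arch.pv.iobmp, port >> 3, 2);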
    
    However, the logic isn't correct either.
    
    In a real TSS, the CPU always reads two bytes (like here), and any TSS limit
    violation turns silently into no-access.  But, in-limit accesses trigger #PF
    as usual.  AMD document this property explicitly, and while Intel don't (so
    far as I can tell), they do behave consistently with AMD.
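
    (A rough model of the hardware behaviour described above.  The names
    model_iopb_check(), read_two_bytes(), iopb_base and tss_limit are
    invented for illustration; this is not SDM/APM pseudocode and not Xen
    code.)

        enum io_check { IO_ALLOWED, IO_DENIED, IO_PAGEFAULT };

        static enum io_check model_iopb_check(unsigned int port,
                                              unsigned int bytes)
        {
            unsigned int off = iopb_base + (port >> 3);
            uint16_t mask;

            if ( off + 1 > tss_limit )
                return IO_DENIED;        /* out of limit: silent no-access */

            /* In-limit reads go through paging and may raise #PF. */
            if ( !read_two_bytes(off, &mask) )
                return IO_PAGEFAULT;

            /* Any set bit covering the access denies it. */
            return (mask & (((1u << bytes) - 1) << (port & 7)))
                   ? IO_DENIED : IO_ALLOWED;
        }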
    
    Switch from __copy_from_guest_offset() to __copy_from_guest_pv(), like
    everything else in this file.  This removes code generation setting up
    copy_from_user_hvm() (in the likely path even), and safety LFENCEs from
    evaluate_nospec().
    
    Change the logic to raise #PF if __copy_from_guest_pv() fails, rather than
    disallowing the IO port access.  This brings the behaviour better in line
    with normal x86.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
    Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
    master commit: 8a6c495d725408d333c1b47bb8af44615a5bfb18
    master date: 2024-10-01 14:58:18 +0100
---
 xen/arch/x86/pv/emul-priv-op.c | 27 ++++++++++++---------------
 1 file changed, 12 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index cc66ffbf8e..e35285d4ab 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -169,29 +169,26 @@ static int guest_io_okay(unsigned int port, unsigned int bytes,
 
     if ( (port + bytes) <= v->arch.pv.iobmp_limit )
     {
-        union { uint8_t bytes[2]; uint16_t mask; } x;
+        const void *__user addr = v->arch.pv.iobmp.p + (port >> 3);
+        uint16_t mask;
+        int rc;
 
-        /*
-         * Grab permission bytes from guest space. Inaccessible bytes are
-         * read as 0xff (no access allowed).
-         */
+        /* Grab permission bytes from guest space. */
         if ( user_mode )
             toggle_guest_pt(v);
 
-        switch ( __copy_from_guest_offset(x.bytes, v->arch.pv.iobmp,
-                                          port>>3, 2) )
-        {
-        default: x.bytes[0] = ~0;
-            /* fallthrough */
-        case 1:  x.bytes[1] = ~0;
-            /* fallthrough */
-        case 0:  break;
-        }
+        rc = __copy_from_guest_pv(&mask, addr, 2);
 
         if ( user_mode )
             toggle_guest_pt(v);
 
-        if ( (x.mask & (((1 << bytes) - 1) << (port & 7))) == 0 )
+        if ( rc )
+        {
+            x86_emul_pagefault(0, (unsigned long)addr + 2 - rc, ctxt);
+            return X86EMUL_EXCEPTION;
+        }
+
+        if ( (mask & (((1 << bytes) - 1) << (port & 7))) == 0 )
             return X86EMUL_OKAY;
     }
 
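For reference, a small standalone illustration of the bitmap indexing used
above (byte offset port >> 3, bit mask ((1 << bytes) - 1) << (port & 7)).
This is illustrative only and not part of the patch:

    #include <stdio.h>

    int main(void)
    {
        unsigned int port = 0x3f9, bytes = 2;

        /* Byte offset into the I/O permission bitmap, and the bits which
         * must all be clear for the access to be permitted. */
        unsigned int offset = port >> 3;
        unsigned int mask   = ((1u << bytes) - 1) << (port & 7);

        printf("port %#x, %u byte(s): offset %#x, mask %#x\n",
               port, bytes, offset, mask);   /* offset 0x7f, mask 0x6 */

        return 0;
    }
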
--
generated by git-patchbot for /home/xen/git/xen.git#staging-4.18
