
[PATCH v2] xen/x86: fix PV trap handling on secondary processors


  • To: Juergen Gross <jgross@xxxxxxxx>, Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Mon, 20 Sep 2021 18:15:11 +0200
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, lkml <linux-kernel@xxxxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Mon, 20 Sep 2021 16:15:25 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

The initial observation was that in PV mode under Xen, 32-bit user
space didn't work anymore. Attempts at system calls ended in #GP(0x402)
(error code 0x402 referencing IDT entry 0x80): all of a sudden the
vector 0x80 handler was not in place anymore. As it turns out, up to
5.13 redundant initialization did occur: Once from
cpu_initialize_context() (through its VCPUOP_initialise hypercall) and
a 2nd time while each CPU was brought fully up. This 2nd initialization
is now gone, uncovering that the 1st one was flawed: Unlike for the
set_trap_table hypercall, a full virtual IDT needs to be specified
here; the "vector" fields of the individual entries are of no interest.
With many (kernel) IDT entries still empty at that point (at least),
the syscall vector 0x80 ended up in slot 0x20 of the virtual IDT, thus
becoming the domain's handler for vector 0x20.
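
To illustrate, here is a minimal standalone sketch (not the kernel
code; struct trap_info mirrors Xen's public interface, while the
populated vectors and gate addresses are made up) of why a compacted
table that is fine for set_trap_table gets misread when interpreted as
a full, positional virtual IDT:

/* Illustration only: a compacted trap table is valid for set_trap_table
 * (keyed by .vector, sentinel-terminated) but not for VCPUOP_initialise
 * (trap_ctxt[] is a full virtual IDT indexed by vector; .vector ignored). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct trap_info {
        uint8_t vector;        /* exception/interrupt vector */
        uint8_t flags;         /* privilege level, clear-event flag */
        uint16_t cs;           /* code selector */
        unsigned long address; /* handler entry point */
};

int main(void)
{
        struct trap_info traps[257];
        unsigned int out = 0, i;

        memset(traps, 0, sizeof(traps));

        /* Pretend only exception vectors 0x00-0x1f and the syscall vector
         * 0x80 have populated gates, compacted the way the flawed
         * conversion did it. */
        for (i = 0; i < 0x20; i++) {
                traps[out].vector = i;
                traps[out].cs = 0x10;
                traps[out].address = 0xc1000000UL + i * 0x40;
                out++;
        }
        traps[out].vector = 0x80;
        traps[out].cs = 0x10;
        traps[out].address = 0xc1008000UL;
        out++;

        /* set_trap_table view: keyed by .vector, so 0x80 is correct. */
        for (i = 0; traps[i].address; i++)
                printf("set_trap_table:    vector 0x%02x -> %#lx\n",
                       traps[i].vector, traps[i].address);

        /* VCPUOP_initialise view: slot i describes vector i, so the 0x80
         * handler shows up registered for vector 0x20. */
        for (i = 0; i < 257; i++)
                if (traps[i].address)
                        printf("VCPUOP_initialise: vector 0x%02x -> %#lx\n",
                               i, traps[i].address);

        return 0;
}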

Make xen_convert_trap_info() fit for either purpose, leveraging the fact
that on the xen_copy_trap_info() path the table starts out zero-filled.
This includes moving out the writing of the sentinel, which would also
have led to a buffer overrun in the xen_copy_trap_info() case if all
(kernel) IDT entries were populated. Convert the writing of the sentinel
to clearing of the entire table entry rather than just the address
field.
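
For context, the VCPUOP_initialise path this relies on is roughly the
following (a heavily abridged paraphrase of cpu_initialize_context() in
arch/x86/xen/smp_pv.c, not the verbatim function; the kzalloc() is what
provides the zero-filled trap_ctxt[] table):

/* Abridged, paraphrased sketch -- not the verbatim kernel function. */
static int cpu_initialize_context(unsigned int cpu, struct task_struct *idle)
{
        struct vcpu_guest_context *ctxt = kzalloc(sizeof(*ctxt), GFP_KERNEL);

        if (ctxt == NULL)
                return -ENOMEM;

        /* ... registers, callbacks, page table setup elided ... */

        /* Fills ctxt->trap_ctxt[] positionally; slots whose IDT gates do
         * not convert are simply left as the zeroes from kzalloc(). */
        xen_copy_trap_info(ctxt->trap_ctxt);

        if (HYPERVISOR_vcpu_op(VCPUOP_initialise, xen_vcpu_nr(cpu), ctxt))
                BUG();

        kfree(ctxt);
        return 0;
}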

(I didn't bother trying to identify the commit which uncovered the issue
in 5.14; the commit named below is the one which actually introduced the
bad code.)

Fixes: f87e4cac4f4e ("xen: SMP guest support")
Cc: stable@xxxxxxxxxxxxxxx
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
To what extent it is correct to use the current CPU's IDT is unclear
to me; it looks at least like another latent trap.
---
v2: Switch back to using xen_convert_trap_info().

--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -755,8 +755,8 @@ static void xen_write_idt_entry(gate_des
        preempt_enable();
 }
 
-static void xen_convert_trap_info(const struct desc_ptr *desc,
-                                 struct trap_info *traps)
+static unsigned xen_convert_trap_info(const struct desc_ptr *desc,
+                                     struct trap_info *traps, bool full)
 {
        unsigned in, out, count;
 
@@ -766,17 +766,18 @@ static void xen_convert_trap_info(const
        for (in = out = 0; in < count; in++) {
                gate_desc *entry = (gate_desc *)(desc->address) + in;
 
-               if (cvt_gate_to_trap(in, entry, &traps[out]))
+               if (cvt_gate_to_trap(in, entry, &traps[out]) || full)
                        out++;
        }
-       traps[out].address = 0;
+
+       return out;
 }
 
 void xen_copy_trap_info(struct trap_info *traps)
 {
        const struct desc_ptr *desc = this_cpu_ptr(&idt_desc);
 
-       xen_convert_trap_info(desc, traps);
+       xen_convert_trap_info(desc, traps, true);
 }
 
 /* Load a new IDT into Xen.  In principle this can be per-CPU, so we
@@ -786,6 +787,7 @@ static void xen_load_idt(const struct de
 {
        static DEFINE_SPINLOCK(lock);
        static struct trap_info traps[257];
+       unsigned out;
 
        trace_xen_cpu_load_idt(desc);
 
@@ -793,7 +795,8 @@ static void xen_load_idt(const struct de
 
        memcpy(this_cpu_ptr(&idt_desc), desc, sizeof(idt_desc));
 
-       xen_convert_trap_info(desc, traps);
+       out = xen_convert_trap_info(desc, traps, false);
+       memset(&traps[out], 0, sizeof(traps[0]));
 
        xen_mc_flush();
        if (HYPERVISOR_set_trap_table(traps))