
[Xen-devel] [BUG] incorrect goto in gnttab_setup_table overdecrements the preemption counter


  • To: xen-devel@xxxxxxxxxxxxx
  • From: Jann Horn <jannh@xxxxxxxxxx>
  • Date: Wed, 29 Nov 2017 15:23:04 +0100
  • Delivery-date: Wed, 29 Nov 2017 14:23:32 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

gnttab_setup_table() has the following code:

=============================================
static long
gnttab_setup_table(
    XEN_GUEST_HANDLE_PARAM(gnttab_setup_table_t) uop, unsigned int count)
{
    struct gnttab_setup_table op;
    struct domain *d;
    struct grant_table *gt;
    int            i;
    xen_pfn_t  gmfn;

    [...]

    d = rcu_lock_domain_by_any_id(op.dom);
    if ( d == NULL )
    {
        gdprintk(XENLOG_INFO, "Bad domid %d.\n", op.dom);
        op.status = GNTST_bad_domain;
        goto out2;
    }

    [...]
 out2:
    rcu_unlock_domain(d);
 out1:
    if ( unlikely(__copy_field_to_guest(uop, &op, status)) )
        return -EFAULT;

    return 0;
}
=============================================

When a bad domain ID is supplied, `rcu_lock_domain_by_any_id()` returns
NULL without having locked anything, yet the `goto out2` in the error
path still lands on `rcu_unlock_domain(d)`, so the unlock runs with
`d == NULL`. `rcu_unlock_domain()` is defined as follows:

=============================================
static inline void rcu_unlock_domain(struct domain *d)
{
    if ( d != current->domain )
        rcu_read_unlock(d);
}
#define rcu_read_unlock(x)     ({ ((void)(x)); preempt_enable(); })
#define preempt_enable() do {                   \
    barrier();                                  \
    preempt_count()--;                          \
} while (0)
=============================================

Because `d` is NULL and therefore not equal to `current->domain`, the
preemption counter is decremented without a corresponding increment;
since the counter is zero at this point, the unsigned decrement wraps
it around to UINT_MAX. In debug builds, this causes an assertion
failure in ASSERT_NOT_IN_ATOMIC on hypercall return; in release
builds, it is harmless because the preemption counter isn't used
anywhere outside preempt_enable() and preempt_disable(), which only
increment and decrement it. (There are some uses of in_atomic() in
hvm.c, but those have been disabled with "#if 0" since at least Xen
4.5.5, the oldest supported release.)

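To make the arithmetic concrete, here is a small standalone model of
the counter (not Xen code, just an unsigned counter with the
increment/decrement semantics shown above):

=============================================
/* Standalone model of the unbalanced preempt_enable(); not Xen code. */
#include <assert.h>
#include <limits.h>
#include <stdio.h>

static unsigned int preempt_count; /* 0 on hypercall entry */

static void preempt_disable(void) { preempt_count++; }
static void preempt_enable(void)  { preempt_count--; }

int main(void)
{
    /* Balanced case: a successful rcu_lock_domain_by_any_id() would
     * increment the counter, and rcu_unlock_domain() decrements it. */
    preempt_disable();
    preempt_enable();
    assert(preempt_count == 0);           /* still fine */

    /* Bad-domid path: the lookup failed, so nothing incremented the
     * counter, but the goto still reaches rcu_unlock_domain(), i.e.
     * an unmatched preempt_enable(). */
    preempt_enable();
    printf("preempt_count = %u (UINT_MAX = %u)\n", preempt_count, UINT_MAX);

    /* This mirrors the check a debug Xen build performs on the way
     * back to the guest: ASSERT(!preempt_count()) via
     * ASSERT_NOT_IN_ATOMIC. */
    assert(preempt_count == 0);           /* fails: counter is UINT_MAX */
    return 0;
}
=============================================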

The following code can be used in a 64-bit PV guest to reproduce the bug:

=============================================
root@pv-guest:~/borkmod4# cat borker.c
#include <linux/module.h>
#include <linux/kernel.h>
#include <asm/xen/page.h>
#include <asm/desc.h>
#include <asm/processor.h>
#include <asm/xen/hypercall.h>
#include <asm/xen/interface.h>
#include <xen/interface/xen.h>
#include <xen/interface/version.h>
#include <xen/interface/memory.h>

static int __init init_mod(void) {
  struct gnttab_setup_table args = {
    .dom = 0xeeee /* invalid domain ID */
  };
  HYPERVISOR_grant_table_op(GNTTABOP_setup_table, &args, 1);

  return -EINVAL;
}

module_init(init_mod);
MODULE_LICENSE("GPL v2");
root@pv-guest:~/borkmod4# cat Makefile
obj-m := borker.o
KDIR := /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)

all:
	$(MAKE) -C $(KDIR) M=$(PWD) modules

clean:
	$(MAKE) -C $(KDIR) M=$(PWD) clean

root@pv-guest:~/borkmod4# make
make -C /lib/modules/4.9.0-4-amd64/build M=/root/borkmod4 modules
make[1]: Entering directory '/usr/src/linux-headers-4.9.0-4-amd64'
  Building modules, stage 2.
  MODPOST 1 modules
make[1]: Leaving directory '/usr/src/linux-headers-4.9.0-4-amd64'
root@pv-guest:~/borkmod4# insmod borker.ko
=============================================

This results in the following crash in a debug build of Xen 4.9.1:

=============================================
(XEN) grant_table.c:1646:d1v0 Bad domid 61166.
(XEN) Assertion '!preempt_count()' failed at preempt.c:36
(XEN) ----[ Xen-4.9.1  x86_64  debug=y   Not tainted ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d080224737>] ASSERT_NOT_IN_ATOMIC+0x46/0x4c
(XEN) RFLAGS: 0000000000010286   CONTEXT: hypervisor (d1v0)
(XEN) rax: ffff82d08058ed28   rbx: ffff8300bfc22000   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: 000000000000000a   rdi: ffffc90040c07cc0
(XEN) rbp: ffff8300bfc57f08   rsp: ffff8300bfc57f08   r8:  ffff83022d3bc000
(XEN) r9:  0000000000000004   r10: 0000000000000004   r11: 0000000000000005
(XEN) r12: ffff88001512d780   r13: ffff880014c98980   r14: ffffffffc006b000
(XEN) r15: ffffffffc006b050   cr0: 0000000080050033   cr4: 00000000001506e4
(XEN) cr3: 0000000005e51000   cr2: ffff880005587058
(XEN) fsb: 00007f471797b700   gsb: ffff880018c00000   gss: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen code around <ffff82d080224737> (ASSERT_NOT_IN_ATOMIC+0x46/0x4c):
(XEN)  58 f6 c4 02 74 06 5d c3 <0f> 0b 0f 0b 0f 0b 55 48 89 e5 48 8d 05 e4 a5 36
(XEN) Xen stack trace from rsp=ffff8300bfc57f08:
(XEN)    00007cff403a80c7 ffff82d080360ffc ffff880013a17100 ffff880013a176b8
(XEN)    ffff880013a17100 ffff880018c18d10 ffffffffc0096000 0000000000000000
(XEN)    0000000000000246 0000000000007ff0 000000000000003a 000000000001f958
(XEN)    0000000000000000 ffffffff8100128a deadbeefdeadf00d deadbeefdeadf00d
(XEN)    deadbeefdeadf00d 0001010000000000 ffffffff8100128a 000000000000e033
(XEN)    0000000000000246 ffffc90040c07ca0 000000000000e02b 000000000009dfa3
(XEN)    003bb8b800000001 003bbd140009dfa3 000000000009dfb0 003bbcec00000000
(XEN)    ffff8300bfc22000 0000000000000000 00000000001506e4
(XEN) Xen call trace:
(XEN)    [<ffff82d080224737>] ASSERT_NOT_IN_ATOMIC+0x46/0x4c
(XEN)    [<ffff82d080360ffc>] entry.o#test_all_events+0x6/0x30
(XEN)
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Assertion '!preempt_count()' failed at preempt.c:36
(XEN) ****************************************
(XEN)
(XEN) Reboot in five seconds...
=============================================

This bug was found using a fuzzer based on AFL and TriforceAFL.
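
For what it's worth, the obvious candidate fix (untested sketch,
assuming the bad-domid path needs no cleanup beyond copying the status
back to the guest) would be to reuse the existing out1 label so that
rcu_unlock_domain() is skipped when no domain was locked:

=============================================
    d = rcu_lock_domain_by_any_id(op.dom);
    if ( d == NULL )
    {
        gdprintk(XENLOG_INFO, "Bad domid %d.\n", op.dom);
        op.status = GNTST_bad_domain;
        goto out1; /* nothing was locked, so skip rcu_unlock_domain() */
    }
=============================================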
