RE: [PATCH v3] x86/mm: Short circuit damage from "fishy" ref/typecount failure
> -----Original Message-----
> From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Sent: 19 January 2021 13:03
> To: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; Jan Beulich <JBeulich@xxxxxxxx>;
> Roger Pau Monné <roger.pau@xxxxxxxxxx>; Wei Liu <wl@xxxxxxx>;
> Paul Durrant <paul@xxxxxxx>; Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
> Subject: [PATCH v3] x86/mm: Short circuit damage from "fishy" ref/typecount failure
>
> This code has been copied in 3 places, but it is problematic.
>
> All cases will hit a BUG() later in domain teardown, when the missing
> type/count reference is underflowed.
>
> Don't complicate the logic by leaving a totally unqualified domain crash, and
> a timebomb which will be triggered by the toolstack at a slightly later, and
> seemingly unrelated, point.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> ---
> CC: Jan Beulich <JBeulich@xxxxxxxx>
> CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
> CC: Wei Liu <wl@xxxxxxx>
> CC: Paul Durrant <paul@xxxxxxx>
> CC: Tamas K Lengyel <tamas@xxxxxxxxxxxxx>
>
> v3:
>  * Actually include the typo corrections to make it compile.

Looks ok now :-)

Reviewed-by: Paul Durrant <paul@xxxxxxx>

> v2:
>  * Reword the commit message.
>  * Switch BUG() to BUG_ON() to further reduce code volume.
> ---
>  xen/arch/x86/hvm/ioreq.c     | 11 ++---------
>  xen/arch/x86/hvm/vmx/vmx.c   | 11 ++---------
>  xen/arch/x86/mm/mem_paging.c | 17 ++++-------------
>  3 files changed, 8 insertions(+), 31 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index 1cc27df87f..0c38cfa151 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -366,15 +366,8 @@ static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
>      if ( !page )
>          return -ENOMEM;
>
> -    if ( !get_page_and_type(page, s->target, PGT_writable_page) )
> -    {
> -        /*
> -         * The domain can't possibly know about this page yet, so failure
> -         * here is a clear indication of something fishy going on.
> -         */
> -        domain_crash(s->emulator);
> -        return -ENODATA;
> -    }
> +    /* Domain can't know about this page yet - something fishy going on. */
> +    BUG_ON(!get_page_and_type(page, s->target, PGT_writable_page));
>
>      iorp->va = __map_domain_page_global(page);
>      if ( !iorp->va )
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 2d4475ee3d..4120234c15 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -3042,15 +3042,8 @@ static int vmx_alloc_vlapic_mapping(struct domain *d)
>      if ( !pg )
>          return -ENOMEM;
>
> -    if ( !get_page_and_type(pg, d, PGT_writable_page) )
> -    {
> -        /*
> -         * The domain can't possibly know about this page yet, so failure
> -         * here is a clear indication of something fishy going on.
> -         */
> -        domain_crash(d);
> -        return -ENODATA;
> -    }
> +    /* Domain can't know about this page yet - something fishy going on. */
> +    BUG_ON(!get_page_and_type(pg, d, PGT_writable_page));
>
>      mfn = page_to_mfn(pg);
>      clear_domain_page(mfn);
> diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
> index 01281f786e..60d667ae94 100644
> --- a/xen/arch/x86/mm/mem_paging.c
> +++ b/xen/arch/x86/mm/mem_paging.c
> @@ -379,19 +379,10 @@ static int prepare(struct domain *d, gfn_t gfn,
>          page = alloc_domheap_page(d, 0);
>          if ( unlikely(page == NULL) )
>              goto out;
> -        if ( unlikely(!get_page(page, d)) )
> -        {
> -            /*
> -             * The domain can't possibly know about this page yet, so failure
> -             * here is a clear indication of something fishy going on.
> -             */
> -            gprintk(XENLOG_ERR,
> -                    "%pd: fresh page for GFN %"PRI_gfn" in unexpected state\n",
> -                    d, gfn_x(gfn));
> -            domain_crash(d);
> -            page = NULL;
> -            goto out;
> -        }
> +
> +        /* Domain can't know about this page yet - something fishy going on. */
> +        BUG_ON(!get_page(page, d));
> +
>          mfn = page_to_mfn(page);
>          page_extant = 0;
>
> --
> 2.11.0
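
The "timebomb" referred to above is worth spelling out. Below is a minimal,
self-contained user-space sketch of the failure mode - not Xen code, and
every name in it is a hypothetical mock: a reference that silently failed
to be taken is still dropped at teardown, so the count underflows and the
assertion fires far away from the real bug. BUG_ON() at the fishy point
surfaces the problem immediately instead.

/*
 * Minimal user-space sketch of the refcount "timebomb" described in the
 * commit message.  All names are hypothetical mocks, not Xen APIs.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

struct mock_domain { int refcount; };

/* Mock of get_page(): takes a reference, or fails "fishily" taking none. */
static bool mock_get_page(struct mock_domain *d, bool fishy)
{
    if ( fishy )
        return false;           /* reference NOT taken */
    d->refcount++;
    return true;
}

/* Mock teardown: drops the reference the caller believes was taken. */
static void mock_teardown(struct mock_domain *d)
{
    d->refcount--;
    assert(d->refcount >= 0);   /* the deferred BUG() analogue */
}

int main(void)
{
    struct mock_domain d = { 0 };

    /* Old behaviour: note the failure, crash the domain, carry on. */
    if ( !mock_get_page(&d, true) )
        fprintf(stderr, "fishy failure noted; teardown still pending\n");

    /* The underflow - and hence the crash - only surfaces here. */
    mock_teardown(&d);

    return 0;
}

Built with any C compiler, the assert trips inside mock_teardown() even
though the fault was injected earlier in main() - the same "slightly later,
and seemingly unrelated" crash the commit message wants to avoid.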