Re: [Xen-devel] [PATCH v14 06/11] x86/hvm/ioreq: add a new mappable resource type...
>>> On 14.12.17 at 10:51, <Paul.Durrant@xxxxxxxxxx> wrote:
>> From: Paul Durrant [mailto:paul.durrant@xxxxxxxxxx]
>> Sent: 28 November 2017 15:09
>> +static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
>> +{
>> +    struct domain *currd = current->domain;
>> +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
>> +
>> +    if ( iorp->page )
>> +    {
>> +        /*
>> +         * If a guest frame has already been mapped (which may happen
>> +         * on demand if hvm_get_ioreq_server_info() is called), then
>> +         * allocating a page is not permitted.
>> +         */
>> +        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
>> +            return -EPERM;
>> +
>> +        return 0;
>> +    }
>> +
>> +    /*
>> +     * Allocated IOREQ server pages are assigned to the emulating
>> +     * domain, not the target domain. This is because the emulator is
>> +     * likely to be destroyed after the target domain has been torn
>> +     * down, and we must use MEMF_no_refcount otherwise page allocation
>> +     * could fail if the emulating domain has already reached its
>> +     * maximum allocation.
>> +     */
>> +    iorp->page = alloc_domheap_page(currd, MEMF_no_refcount);
>
> This is no longer going to work as it is predicated on my original
> modification to HYPERVISOR_mmu_update (which allowed a PV domain to map a
> foreign MFN from a domain over which it had privilege as if the MFN was
> local). Because that mechanism was decided against, this code needs to change
> to use the target domain of the ioreq server rather than the calling domain.
> I will verify this modification and submit v15 of the series.
>
> Jan, are you ok for me to keep your R-b?

This is all pretty fragile - better drop it and I'll then take a look
once you've sent the new version.

Jan
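As a side note on the change Paul describes, below is a minimal sketch of what
hvm_alloc_ioreq_mfn() might look like once the page is allocated against the
ioreq server's target domain instead of the calling (emulating) domain. The
"target" field name and the mapping/cleanup tail after the allocation are
assumptions made for illustration; they are not taken from the (not yet
posted) v15 patch.

/*
 * Illustrative sketch only -- not the v15 patch. Assumes the ioreq
 * server structure carries a pointer ("target") to the domain it
 * serves, and keeps MEMF_no_refcount so the allocation is not charged
 * against that domain's normal memory allowance.
 */
static int hvm_alloc_ioreq_mfn(struct hvm_ioreq_server *s, bool buf)
{
    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
    struct page_info *page;

    if ( iorp->page )
    {
        /* A guest frame already mapped on demand forbids allocation. */
        if ( !gfn_eq(iorp->gfn, INVALID_GFN) )
            return -EPERM;

        return 0;
    }

    /*
     * Allocate against the target domain of the ioreq server rather
     * than current->domain, since the mmu_update-based mapping of
     * emulator-owned MFNs was decided against.
     */
    page = alloc_domheap_page(s->target, MEMF_no_refcount);
    if ( !page )
        return -ENOMEM;

    iorp->va = __map_domain_page_global(page);
    if ( !iorp->va )
    {
        free_domheap_page(page);
        return -ENOMEM;
    }

    iorp->page = page;
    clear_page(iorp->va);

    return 0;
}

Whether the page also needs an explicit reference held for the lifetime of the
mapping is deliberately left out here; that is the sort of detail Jan asks to
review once the new version of the series is posted.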