Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
On 25/04/16 15:01, Paul Durrant wrote:
>> -----Original Message-----
>> From: dunlapg@xxxxxxxxx [mailto:dunlapg@xxxxxxxxx] On Behalf Of
>> George Dunlap
>> Sent: 25 April 2016 14:39
>> To: Yu Zhang
>> Cc: xen-devel@xxxxxxxxxxxxx; Kevin Tian; Keir (Xen.org); Jun Nakajima;
>> Andrew Cooper; Tim (Xen.org); Paul Durrant; Lv, Zhiyuan; Jan Beulich; Wei Liu
>> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
>> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>>
>> On Mon, Apr 25, 2016 at 11:35 AM, Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
>> wrote:
>>> Previously p2m type p2m_mmio_write_dm was introduced for write-
>>> protected memory pages whose write operations are supposed to be
>>> forwarded to and emulated by an ioreq server. Yet limitations of
>>> rangeset restrict the number of guest pages to be write-protected.
>>>
>>> This patch replaces the p2m type p2m_mmio_write_dm with a new name:
>>> p2m_ioreq_server, which means this p2m type can be claimed by one
>>> ioreq server, instead of being tracked inside the rangeset of ioreq
>>> server. Patches following up will add the related hvmop handling
>>> code which map/unmap type p2m_ioreq_server to/from an ioreq server.
>>>
>>> changes in v3:
>>>   - According to Jan & George's comments, keep HVMMEM_mmio_write_dm
>>>     for old xen interface versions, and replace it with HVMMEM_unused
>>>     for xen interfaces newer than 4.7.0; For p2m_ioreq_server, a new
>>>     enum - HVMMEM_ioreq_server is introduced for the get/set mem type
>>>     interfaces;
>>>   - Add George's Reviewed-by and Acked-by from Tim & Andrew.
>>
>> Unfortunately these rather contradict each other -- I consider
>> Reviewed-by to only stick if the code I've specified hasn't changed
>> (or has only changed trivially).
>>
>> Also...
>>
>>>
>>> changes in v2:
>>>   - According to George Dunlap's comments, only rename the p2m type,
>>>     with no behavior changes.
>>>
>>> Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
>>> Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
>>> Acked-by: Tim Deegan <tim@xxxxxxx>
>>> Acked-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>> Reviewed-by: George Dunlap <george.dunlap@xxxxxxxxxx>
>>> Cc: Keir Fraser <keir@xxxxxxx>
>>> Cc: Jan Beulich <jbeulich@xxxxxxxx>
>>> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>> Cc: Jun Nakajima <jun.nakajima@xxxxxxxxx>
>>> Cc: Kevin Tian <kevin.tian@xxxxxxxxx>
>>> Cc: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
>>> Cc: Tim Deegan <tim@xxxxxxx>
>>> ---
>>>  xen/arch/x86/hvm/hvm.c          | 14 ++++++++------
>>>  xen/arch/x86/mm/p2m-ept.c       |  2 +-
>>>  xen/arch/x86/mm/p2m-pt.c        |  2 +-
>>>  xen/arch/x86/mm/shadow/multi.c  |  2 +-
>>>  xen/include/asm-x86/p2m.h       |  4 ++--
>>>  xen/include/public/hvm/hvm_op.h |  8 +++++++-
>>>  6 files changed, 20 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>>> index f24126d..874cb0f 100644
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -1857,7 +1857,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
>>>       */
>>>      if ( (p2mt == p2m_mmio_dm) ||
>>>           (npfec.write_access &&
>>> -          (p2m_is_discard_write(p2mt) || (p2mt == p2m_mmio_write_dm))) )
>>> +          (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
>>>      {
>>>          __put_gfn(p2m, gfn);
>>>          if ( ap2m_active )
>>> @@ -5499,8 +5499,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>          get_gfn_query_unlocked(d, a.pfn, &t);
>>>          if ( p2m_is_mmio(t) )
>>>              a.mem_type = HVMMEM_mmio_dm;
>>> -        else if ( t == p2m_mmio_write_dm )
>>> -            a.mem_type = HVMMEM_mmio_write_dm;
>>> +        else if ( t == p2m_ioreq_server )
>>> +            a.mem_type = HVMMEM_ioreq_server;
>>>          else if ( p2m_is_readonly(t) )
>>>              a.mem_type = HVMMEM_ram_ro;
>>>          else if ( p2m_is_ram(t) )
>>> @@ -5531,7 +5531,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>              [HVMMEM_ram_rw]  = p2m_ram_rw,
>>>              [HVMMEM_ram_ro]  = p2m_ram_ro,
>>>              [HVMMEM_mmio_dm] = p2m_mmio_dm,
>>> -            [HVMMEM_mmio_write_dm] = p2m_mmio_write_dm
>>> +            [HVMMEM_unused] = p2m_invalid,
>>> +            [HVMMEM_ioreq_server] = p2m_ioreq_server
>>>          };
>>>
>>>          if ( copy_from_guest(&a, arg, 1) )
>>> @@ -5555,7 +5556,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>               ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
>>>              goto setmemtype_fail;
>>>
>>> -        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) )
>>> +        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
>>> +             unlikely(a.hvmmem_type == HVMMEM_unused) )
>>>              goto setmemtype_fail;
>>>
>>>          while ( a.nr > start_iter )
>>> @@ -5579,7 +5581,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>>>              }
>>>              if ( !p2m_is_ram(t) &&
>>>                   (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm) &&
>>> -                 (t != p2m_mmio_write_dm || a.hvmmem_type != HVMMEM_ram_rw) )
>>> +                 (t != p2m_ioreq_server || a.hvmmem_type != HVMMEM_ram_rw) )
>>>              {
>>>                  put_gfn(d, pfn);
>>>                  goto setmemtype_fail;
>>> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
>>> index 3cb6868..380ec25 100644
>>> --- a/xen/arch/x86/mm/p2m-ept.c
>>> +++ b/xen/arch/x86/mm/p2m-ept.c
>>> @@ -171,7 +171,7 @@ static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry,
>>>              entry->a = entry->d = !!cpu_has_vmx_ept_ad;
>>>              break;
>>>          case p2m_grant_map_ro:
>>> -        case p2m_mmio_write_dm:
>>> +        case p2m_ioreq_server:
>>>              entry->r = 1;
>>>              entry->w = entry->x = 0;
>>>              entry->a = !!cpu_has_vmx_ept_ad;
>>> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
>>> index 3d80612..eabd2e3 100644
>>> --- a/xen/arch/x86/mm/p2m-pt.c
>>> +++ b/xen/arch/x86/mm/p2m-pt.c
>>> @@ -94,7 +94,7 @@ static unsigned long p2m_type_to_flags(p2m_type_t t, mfn_t mfn,
>>>      default:
>>>          return flags | _PAGE_NX_BIT;
>>>      case p2m_grant_map_ro:
>>> -    case p2m_mmio_write_dm:
>>> +    case p2m_ioreq_server:
>>>          return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT;
>>>      case p2m_ram_ro:
>>>      case p2m_ram_logdirty:
>>> diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
>>> index e5c8499..c81302a 100644
>>> --- a/xen/arch/x86/mm/shadow/multi.c
>>> +++ b/xen/arch/x86/mm/shadow/multi.c
>>> @@ -3225,7 +3225,7 @@ static int sh_page_fault(struct vcpu *v,
>>>
>>>      /* Need to hand off device-model MMIO to the device model */
>>>      if ( p2mt == p2m_mmio_dm
>>> -         || (p2mt == p2m_mmio_write_dm && ft == ft_demand_write) )
>>> +         || (p2mt == p2m_ioreq_server && ft == ft_demand_write) )
>>>      {
>>>          gpa = guest_walk_to_gpa(&gw);
>>>          goto mmio;
>>> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
>>> index 5392eb0..ee2ea9c 100644
>>> --- a/xen/include/asm-x86/p2m.h
>>> +++ b/xen/include/asm-x86/p2m.h
>>> @@ -71,7 +71,7 @@ typedef enum {
>>>      p2m_ram_shared = 12,      /* Shared or sharable memory */
>>>      p2m_ram_broken = 13,      /* Broken page, access cause domain crash */
>>>      p2m_map_foreign = 14,     /* ram pages from foreign domain */
>>> -    p2m_mmio_write_dm = 15,   /* Read-only; writes go to the device model */
>>> +    p2m_ioreq_server = 15,
>>>  } p2m_type_t;
>>>
>>>  /* Modifiers to the query */
>>> @@ -112,7 +112,7 @@ typedef unsigned int p2m_query_t;
>>>                         | p2m_to_mask(p2m_ram_ro)         \
>>>                         | p2m_to_mask(p2m_grant_map_ro)   \
>>>                         | p2m_to_mask(p2m_ram_shared)     \
>>> -                       | p2m_to_mask(p2m_mmio_write_dm))
>>> +                       | p2m_to_mask(p2m_ioreq_server))
>>>
>>>  /* Write-discard types, which should discard the write operations */
>>>  #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro) \
>>> diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
>>> index 1606185..b3e45cf 100644
>>> --- a/xen/include/public/hvm/hvm_op.h
>>> +++ b/xen/include/public/hvm/hvm_op.h
>>> @@ -83,7 +83,13 @@ typedef enum {
>>>      HVMMEM_ram_rw,             /* Normal read/write guest RAM */
>>>      HVMMEM_ram_ro,             /* Read-only; writes are discarded */
>>>      HVMMEM_mmio_dm,            /* Reads and write go to the device model */
>>> -    HVMMEM_mmio_write_dm       /* Read-only; writes go to the device model */
>>> +#if __XEN_INTERFACE_VERSION__ < 0x00040700
>>> +    HVMMEM_mmio_write_dm,      /* Read-only; writes go to the device model */
>>> +#else
>>> +    HVMMEM_unused,             /* Placeholder; setting memory to this type
>>> +                                  will fail for code after 4.7.0 */
>>> +#endif
>>> +    HVMMEM_ioreq_server
>>
>> Also, I don't think we've had a convincing argument for why this patch
>> needs to be in 4.7.  The p2m name changes are internal only, and so
>> don't need to be made at all; and the old functionality will work as
>> well as it ever did.  Furthermore, the whole reason we're in this
>> situation is that we checked in a publicly-visible change to the
>> interface before it was properly ready; I think we should avoid making
>> the same mistake this time.
>>
>> So personally I'd just leave this patch entirely for 4.8; but if Paul
>> and/or Jan have strong opinions, then I would say check in only a
>> patch to do the #if/#else/#endif, and leave both the p2m type changes
>> and the new HVMMEM_ioreq_server enum for when the 4.8 development
>> window opens.
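[Editorial note: the standalone C sketch below is not part of the patch or of Xen; OLD_INTERFACE stands in for the real __XEN_INTERFACE_VERSION__ < 0x00040700 test, and the comments assume the slot numbering implied by the hvm_op.h hunk quoted above. It only illustrates the point of the #if/#else/#endif: old consumers keep HVMMEM_mmio_write_dm at its existing numeric value, new consumers see HVMMEM_unused in that slot, and HVMMEM_ioreq_server takes the next value either way, so the public enum's numbering never shifts.]

/*
 * Illustrative only -- a compile-it-yourself model of the interface-version
 * guard, not the actual xen/include/public/hvm/hvm_op.h definitions.
 */
#include <stdio.h>

typedef enum {
    HVMMEM_ram_rw,             /* 0: normal read/write guest RAM */
    HVMMEM_ram_ro,             /* 1: read-only; writes are discarded */
    HVMMEM_mmio_dm,            /* 2: reads and writes go to the device model */
#if defined(OLD_INTERFACE)     /* stand-in for __XEN_INTERFACE_VERSION__ < 0x00040700 */
    HVMMEM_mmio_write_dm,      /* 3: read-only; writes go to the device model */
#else
    HVMMEM_unused,             /* 3: placeholder; set_mem_type rejects it */
#endif
    HVMMEM_ioreq_server        /* 4: pages claimable by a single ioreq server */
} hvmmem_type_t;

int main(void)
{
    /* Whichever branch is compiled in, slot 3 keeps value 3 and the
     * new name lands at value 4, so old binaries keep working. */
    printf("HVMMEM_ioreq_server = %d\n", (int)HVMMEM_ioreq_server);
    return 0;
}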
> If the whole series is going in then I think this patch is ok. If this is
> the only patch that is going in for 4.7 then I think we need the patch to
> hvm_op.h to deprecate the old type and that's it. We definitely should not
> introduce an implementation of the type HVMMEM_ioreq_server that has the
> same hardcoded semantics as the old type and then change it.
> The p2m type changes are also wrong. That type needs to be left alone,
> presumably, so that anything using HVMMEM_mmio_write_dm and compiled to the
> old interface version continues to function. I think HVMMEM_ioreq_server
> needs to map to a new p2m type which should be introduced in patch #3.

Well yes, if the whole series is going in, the patch is OK; but I assumed
that since it's a new feature that missed the hard deadline, we were at this
point only talking about how to fix up the interface for the 4.7 release.

I think for 4.8 it should return -EINVAL until someone complains that it's
not working, but that's something we can discuss when the development
window opens.

Thanks,
 -George
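[Editorial note: the standalone C sketch below is not Xen code; the enum names and the shape of the memtype[] table merely mirror the do_hvm_op() hunk quoted earlier, and the validation helper is a hypothetical model of the behaviour being argued for here -- failing with -EINVAL when a caller passes the retired HVMMEM_unused slot or an out-of-range type, while still mapping HVMMEM_ioreq_server to a p2m type.]

#include <errno.h>
#include <stdio.h>

enum { HVMMEM_ram_rw, HVMMEM_ram_ro, HVMMEM_mmio_dm,
       HVMMEM_unused, HVMMEM_ioreq_server, HVMMEM_max };

enum { p2m_ram_rw, p2m_ram_ro, p2m_mmio_dm, p2m_invalid, p2m_ioreq_server };

static const int memtype[HVMMEM_max] = {
    [HVMMEM_ram_rw]       = p2m_ram_rw,
    [HVMMEM_ram_ro]       = p2m_ram_ro,
    [HVMMEM_mmio_dm]      = p2m_mmio_dm,
    [HVMMEM_unused]       = p2m_invalid,
    [HVMMEM_ioreq_server] = p2m_ioreq_server,
};

/* Reject out-of-range types and the retired placeholder slot;
 * otherwise return the p2m type the request would map to. */
static int check_mem_type(unsigned int hvmmem_type)
{
    if ( hvmmem_type >= HVMMEM_max || hvmmem_type == HVMMEM_unused )
        return -EINVAL;
    return memtype[hvmmem_type];
}

int main(void)
{
    printf("HVMMEM_unused       -> %d\n", check_mem_type(HVMMEM_unused));       /* -EINVAL */
    printf("HVMMEM_ioreq_server -> %d\n", check_mem_type(HVMMEM_ioreq_server)); /* p2m type */
    return 0;
}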