[Xen-changelog] [xen master] x86/HVM: latch linear->phys translation results
commit 67fc274bbec51a99c762aa1fb6c3de661032aa8d
Author:     Jan Beulich <jbeulich@xxxxxxxx>
AuthorDate: Tue Jun 14 15:09:51 2016 +0200
Commit:     Jan Beulich <jbeulich@xxxxxxxx>
CommitDate: Tue Jun 14 15:09:51 2016 +0200

    x86/HVM: latch linear->phys translation results

    ... to avoid re-doing the same translation later again (in a retry, for
    example). This doesn't help very often according to my testing, but it's
    pretty cheap to have, and will be of further use subsequently.

    Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
    Reviewed-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
    Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
 xen/arch/x86/hvm/emulate.c | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index b9cac8e..8a033a3 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -678,6 +678,19 @@ static struct hvm_mmio_cache *hvmemul_find_mmio_cache(
     return cache;
 }
 
+static void latch_linear_to_phys(struct hvm_vcpu_io *vio, unsigned long gla,
+                                 unsigned long gpa, bool_t write)
+{
+    if ( vio->mmio_access.gla_valid )
+        return;
+
+    vio->mmio_gva = gla & PAGE_MASK;
+    vio->mmio_gpfn = PFN_DOWN(gpa);
+    vio->mmio_access = (struct npfec){ .gla_valid = 1,
+                                       .read_access = 1,
+                                       .write_access = write };
+}
+
 static int hvmemul_linear_mmio_access(
     unsigned long gla, unsigned int size, uint8_t dir, void *buffer,
     uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt, bool_t known_gpfn)
@@ -703,6 +716,8 @@ static int hvmemul_linear_mmio_access(
                                     hvmemul_ctxt);
         if ( rc != X86EMUL_OKAY )
             return rc;
+
+        latch_linear_to_phys(vio, gla, gpa, dir == IOREQ_WRITE);
     }
 
     for ( ;; )
--
generated by git-patchbot for /home/xen/git/xen.git#master

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog
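
The patch only records the translation; how the latched values are consumed
later is not part of this diff. As a rough standalone illustration of the
latch-and-reuse idea (not the Xen code itself: struct mmio_latch and the
helpers latch_translation()/reuse_translation() below are invented for the
sketch), a retried access to the same page could skip a second page walk
roughly like this:

/*
 * Standalone model of latch-and-reuse: the first linear->phys translation
 * of a page is cached, and a retry of an access within the same page reuses
 * the cached frame number instead of translating again.
 * Illustrative only; names and types are simplified stand-ins.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SHIFT  12
#define PAGE_SIZE   (1UL << PAGE_SHIFT)
#define PAGE_MASK   (~(PAGE_SIZE - 1))
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)

/* Simplified stand-in for per-vCPU MMIO state. */
struct mmio_latch {
    bool gla_valid, read_access, write_access;
    unsigned long gva;   /* linear address of the latched page */
    unsigned long gpfn;  /* guest frame it translated to */
};

/* Record the first translation only, as the patch's helper does. */
static void latch_translation(struct mmio_latch *l, unsigned long gla,
                              unsigned long gpa, bool write)
{
    if ( l->gla_valid )
        return;

    l->gva = gla & PAGE_MASK;
    l->gpfn = PFN_DOWN(gpa);
    l->gla_valid = true;
    l->read_access = true;
    l->write_access = write;
}

/* On a retry, reuse the latched frame if the same page is accessed again. */
static bool reuse_translation(const struct mmio_latch *l, unsigned long gla,
                              bool write, unsigned long *gpa)
{
    if ( !l->gla_valid || (gla & PAGE_MASK) != l->gva )
        return false;
    if ( write && !l->write_access )
        return false;

    *gpa = (l->gpfn << PAGE_SHIFT) | (gla & ~PAGE_MASK);
    return true;
}

int main(void)
{
    struct mmio_latch latch = { 0 };
    unsigned long gpa;

    /* First pass: pretend a page walk mapped gla 0x7f001234 to gpa 0xfeb00234. */
    latch_translation(&latch, 0x7f001234UL, 0xfeb00234UL, /*write=*/false);

    /* Retry of a read within the same page: no second walk needed. */
    if ( reuse_translation(&latch, 0x7f001238UL, false, &gpa) )
        printf("reused translation: gpa = %#lx\n", gpa);

    return 0;
}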