[Xen-devel] RE: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor cpu_physical_memory_map
> -----Original Message-----
> From: Alexander Graf [mailto:agraf@xxxxxxx]
> Sent: Friday, July 22, 2011 2:00 PM
> To: Liu Yu-B13201
> Cc: stefano.stabellini@xxxxxxxxxxxxx; qemu-devel@xxxxxxxxxx;
> xen-devel@xxxxxxxxxxxxxxxxxxx; Yoder Stuart-B08248
> Subject: Re: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor
> cpu_physical_memory_map
>
> On 22.07.2011, at 07:42, Liu Yu-B13201 wrote:
>
> >
> >> -----Original Message-----
> >> From: qemu-devel-bounces+yu.liu=freescale.com@xxxxxxxxxx
> >> [mailto:qemu-devel-bounces+yu.liu=freescale.com@xxxxxxxxxx]
> >> On Behalf Of stefano.stabellini@xxxxxxxxxxxxx
> >> Sent: Friday, May 20, 2011 1:36 AM
> >> To: qemu-devel@xxxxxxxxxx
> >> Cc: xen-devel@xxxxxxxxxxxxxxxxxxx; agraf@xxxxxxx; Stefano Stabellini
> >> Subject: [Qemu-devel] [PATCH v2 4/5] exec.c: refactor
> >> cpu_physical_memory_map
> >>
> >> From: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> >>
> >> Introduce qemu_ram_ptr_length, which takes an address and a size as
> >> parameters rather than just an address.
> >>
> >> Refactor cpu_physical_memory_map so that we call qemu_ram_ptr_length
> >> only once rather than calling qemu_get_ram_ptr once per page.
> >> This is not only more efficient but also simplifies the logic of the
> >> function.
> >> Currently we rely on the fact that all the pages are mapped
> >> contiguously in qemu's address space: we have a check to make sure
> >> that the virtual address returned by qemu_get_ram_ptr from the second
> >> call onwards is consecutive. Now we make this explicit by replacing
> >> all the calls to qemu_get_ram_ptr with a single call to
> >> qemu_ram_ptr_length, passing a size argument.
> >>
> >> Signed-off-by: Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
> >> CC: agraf@xxxxxxx
> >> CC: anthony@xxxxxxxxxxxxx
> >> ---
> >>  cpu-common.h |    1 +
> >>  exec.c       |   51 ++++++++++++++++++++++++++++++++++-----------------
> >>  2 files changed, 35 insertions(+), 17 deletions(-)
> >>
> >> diff --git a/cpu-common.h b/cpu-common.h
> >> index 151c32c..085aacb 100644
> >> --- a/cpu-common.h
> >> +++ b/cpu-common.h
> >> @@ -64,6 +64,7 @@ void qemu_ram_free(ram_addr_t addr);
> >>  void qemu_ram_remap(ram_addr_t addr, ram_addr_t length);
> >>  /* This should only be used for ram local to a device.  */
> >>  void *qemu_get_ram_ptr(ram_addr_t addr);
> >> +void *qemu_ram_ptr_length(target_phys_addr_t addr, target_phys_addr_t *size);
> >>  /* Same but slower, to use for migration, where the order of
> >>   * RAMBlocks must not change. */
> >>  void *qemu_safe_ram_ptr(ram_addr_t addr);
> >> diff --git a/exec.c b/exec.c
> >> index 21f21f0..ff9c174 100644
> >> --- a/exec.c
> >> +++ b/exec.c
> >> @@ -3111,6 +3111,31 @@ void *qemu_safe_ram_ptr(ram_addr_t addr)
> >>      return NULL;
> >>  }
> >>
> >> +/* Return a host pointer to guest's ram. Similar to qemu_get_ram_ptr
> >> + * but takes a size argument */
> >> +void *qemu_ram_ptr_length(target_phys_addr_t addr, target_phys_addr_t *size)
> >> +{
> >> +    if (xen_mapcache_enabled())
> >> +        return qemu_map_cache(addr, *size, 1);
> >> +    else {
> >> +        RAMBlock *block;
> >> +
> >> +        QLIST_FOREACH(block, &ram_list.blocks, next) {
> >> +            if (addr - block->offset < block->length) {
> >> +                if (addr - block->offset + *size > block->length)
> >> +                    *size = block->length - addr + block->offset;
> >> +                return block->host + (addr - block->offset);
> >> +            }
> >> +        }
> >> +
> >> +        fprintf(stderr, "Bad ram offset %" PRIx64 "\n", (uint64_t)addr);
> >> +        abort();
> >> +
> >> +        *size = 0;
> >> +        return NULL;
> >> +    }
> >> +}
> >> +
> >>  void qemu_put_ram_ptr(void *addr)
> >>  {
> >>      trace_qemu_put_ram_ptr(addr);
> >> @@ -3972,14 +3997,12 @@ void *cpu_physical_memory_map(target_phys_addr_t addr,
> >>                                int is_write)
> >>  {
> >>      target_phys_addr_t len = *plen;
> >> -    target_phys_addr_t done = 0;
> >> +    target_phys_addr_t todo = 0;
> >>      int l;
> >> -    uint8_t *ret = NULL;
> >> -    uint8_t *ptr;
> >>      target_phys_addr_t page;
> >>      unsigned long pd;
> >>      PhysPageDesc *p;
> >> -    unsigned long addr1;
> >> +    target_phys_addr_t addr1 = addr;
> >>
> >>      while (len > 0) {
> >>          page = addr & TARGET_PAGE_MASK;
> >> @@ -3994,7 +4017,7 @@ void *cpu_physical_memory_map(target_phys_addr_t addr,
> >>          }
> >>
> >>          if ((pd & ~TARGET_PAGE_MASK) != IO_MEM_RAM) {
> >> -            if (done || bounce.buffer) {
> >> +            if (todo || bounce.buffer) {
> >>                  break;
> >>              }
> >>              bounce.buffer = qemu_memalign(TARGET_PAGE_SIZE, TARGET_PAGE_SIZE);
> >> @@ -4003,23 +4026,17 @@ void *cpu_physical_memory_map(target_phys_addr_t addr,
> >>              if (!is_write) {
> >>                  cpu_physical_memory_read(addr, bounce.buffer, l);
> >>              }
> >> -            ptr = bounce.buffer;
> >> -        } else {
> >> -            addr1 = (pd & TARGET_PAGE_MASK) + (addr & ~TARGET_PAGE_MASK);
> >> -            ptr = qemu_get_ram_ptr(addr1);
> >> -        }
> >> -        if (!done) {
> >> -            ret = ptr;
> >> -        } else if (ret + done != ptr) {
> >> -            break;
> >> +
> >> +            *plen = l;
> >> +            return bounce.buffer;
> >>          }
> >>
> >>          len -= l;
> >>          addr += l;
> >> -        done += l;
> >> +        todo += l;
> >>      }
> >> -    *plen = done;
> >> -    return ret;
> >> +    *plen = todo;
> >> +    return qemu_ram_ptr_length(addr1, plen);
> >>  }
> >>
> >>  /* Unmaps a memory region previously mapped by cpu_physical_memory_map().
> >> --
> >> 1.7.2.3
> >
> > Hello Stefano,
> >
> > This commit breaks the case where guest memory doesn't start at 0.
> >
> > The previous code
> >     addr1 = (pd & TARGET_PAGE_MASK) + (addr & ~TARGET_PAGE_MASK);
> > translates the guest physical address to a qemu ram_addr, so that it
> > passes the ram range check.
> >
> > But the current code
> >     addr1 = addr;
> > makes it fail the ram range check.
>
> Are you sure it's still broken with commit
> 8ab934f93b5ad3d0af4ad419d2531235a75d672c? If so, mind pinpointing
> where exactly?

Ah, I missed that one. Yes, it works with commit
8ab934f93b5ad3d0af4ad419d2531235a75d672c.
Sorry, please ignore this.

Thanks,
Yu

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel