# HG changeset patch # User yamahata@xxxxxxxxxxxxx # Date 1170415314 -32400 # Node ID 4466d95ea07b0842223e55df6ab06f4351e1ba70 # Parent 74ada22b59e8a96b05415b89810367f44fcfd884 Use the guest's own p2m table instead of xc_get_pfn_list(), which cannot handle PFNs with no MFN. Dump a zeroed page for PFNs with no MFN. Clearly deprecate xc_get_pfn_list(). Do not include a P2M table with HVM domains. Refuse to dump HVM until we can map its pages with PFNs. Signed-off-by: John Levon ELF formatified with note section. added PFN-GMFN table for non-auto translated physmap added PFN table for auto translated physmap. HVM domain support. IA64 support. PATCHNAME: xm_dump_core_elf Signed-off-by: Isaku Yamahata diff -r 74ada22b59e8 -r 4466d95ea07b tools/libxc/xc_core.c --- a/tools/libxc/xc_core.c Mon Jan 29 14:07:26 2007 +0900 +++ b/tools/libxc/xc_core.c Fri Feb 02 20:21:54 2007 +0900 @@ -1,10 +1,56 @@ +/* + * Elf format, (pfn, gmfn) table, IA64 support. + * Copyright (c) 2007 Isaku Yamahata + * VA Linux Systems Japan K.K. + * + * xen dump-core file format follows ELF format specification. + * Analisys tools shouldn't depends on the order of sections. + * They should follow elf header and check section names. + * + * +--------------------------------------------------------+ + * |ELF header | + * +--------------------------------------------------------+ + * |section headers | + * | null section header | + * | .shstrtab | + * | .note.Xen | + * | .Xen.p2m or .Xen.pfn | + * | .Xen.pages | + * +--------------------------------------------------------+ + * |.note.Xen:note section | + * | "Xen" is used as note name, | + * | types are defined in xen/include/public/elfnote.h | + * | and descriptors are defined in xc_core.h. | + * | dumpcore none | + * | dumpcore header | + * | dumpcore xen version | + * | dumpcore format version | + * | dumpcore prstatus | + * | vcpu_guest_context_t[nr_vcpus] | + * +--------------------------------------------------------+ + * |.Xen.shared_info if possible | + * +--------------------------------------------------------+ + * |.Xen.p2m or .Xen.pfn | + * | .Xen.p2m: struct p2m[nr_pages] | + * | .Xen.pfn: uint64_t[nr_pages] | + * +--------------------------------------------------------+ + * |.Xen.pages | + * | page * nr_pages | + * +--------------------------------------------------------+ + * |.shstrtab: section header string table | + * +--------------------------------------------------------+ + * + */ + #include "xg_private.h" +#include "xc_elf.h" +#include "xc_dom.h" +#include "xc_core.h" #include #include /* number of pages to write at a time */ #define DUMP_INCREMENT (4 * 1024) -#define round_pgup(_p) (((_p)+(PAGE_SIZE-1))&PAGE_MASK) static int copy_from_domain_page(int xc_handle, @@ -21,107 +67,990 @@ copy_from_domain_page(int xc_handle, return 0; } +struct memory_map_entry { + uint64_t addr; + uint64_t size; +}; +typedef struct memory_map_entry memory_map_entry_t; + +#if defined(__i386__) || defined(__x86_64__) +#define ELF_ARCH_DATA ELFDATA2LSB +#if defined (__i386__) +# define ELF_ARCH_MACHINE EM_386 +#else +# define ELF_ARCH_MACHINE EM_X86_64 +#endif + +static int +is_auto_translated_physmap(const xc_dominfo_t *info) +{ + if ( info->hvm ) + return 1; + return 0; +} + +static int +memory_map_get(int xc_handle, xc_dominfo_t *info, shared_info_t *live_shinfo, + memory_map_entry_t **mapp, unsigned int *nr_entries) +{ + unsigned long max_pfn = live_shinfo->arch.max_pfn; + memory_map_entry_t *map = NULL; + + map = malloc(sizeof(*map)); + if ( !map ) + { + PERROR("Could not 
allocate memory"); + goto out; + } + + map->addr = 0; + map->size = max_pfn << PAGE_SHIFT; + + *mapp = map; + *nr_entries = 1; + return 0; + +out: + if ( map ) + free(map); + return -1; +} + +static int +map_p2m(int xc_handle, xc_dominfo_t *info, shared_info_t *live_shinfo, + xen_pfn_t **live_p2m, unsigned long *pfnp) +{ + /* Double and single indirect references to the live P2M table */ + xen_pfn_t *live_p2m_frame_list_list = NULL; + xen_pfn_t *live_p2m_frame_list = NULL; + uint32_t dom = info->domid; + unsigned long max_pfn = live_shinfo->arch.max_pfn; + int ret = -1; + int err; + + if ( max_pfn < info->nr_pages ) + { + ERROR("max_pfn < nr_pages -1 (%lx < %lx", max_pfn, info->nr_pages - 1); + goto out; + } + + live_p2m_frame_list_list = + xc_map_foreign_range(xc_handle, dom, PAGE_SIZE, PROT_READ, + live_shinfo->arch.pfn_to_mfn_frame_list_list); + + if ( !live_p2m_frame_list_list ) + { + PERROR("Couldn't map p2m_frame_list_list (errno %d)", errno); + goto out; + } + + live_p2m_frame_list = + xc_map_foreign_batch(xc_handle, dom, PROT_READ, + live_p2m_frame_list_list, + P2M_FLL_ENTRIES); + + if ( !live_p2m_frame_list ) + { + PERROR("Couldn't map p2m_frame_list"); + goto out; + } + + *live_p2m = xc_map_foreign_batch(xc_handle, dom, PROT_READ, + live_p2m_frame_list, + P2M_FL_ENTRIES); + + if ( !live_p2m ) + { + PERROR("Couldn't map p2m table"); + goto out; + } + + *pfnp = max_pfn; + + ret = 0; + +out: + err = errno; + + if ( live_p2m_frame_list_list ) + munmap(live_p2m_frame_list_list, PAGE_SIZE); + + if ( live_p2m_frame_list ) + munmap(live_p2m_frame_list, P2M_FLL_ENTRIES * PAGE_SIZE); + + errno = err; + return ret; +} +#elif defined (__ia64__) +#define ELF_ARCH_DATA ELFDATA2LSB +#define ELF_ARCH_MACHINE EM_IA_64 + +#include "xc_efi.h" + +static int +is_auto_translated_physmap(const xc_dominfo_t *info) +{ + /* + * on ia64, both paravirtualize domain and hvm domain are + * auto_translated_physmap mode + */ + return 1; +} + +/* see setup_guest() @ xc_linux_build.c */ +static int +memory_map_get_old_domu(int xc_handle, xc_dominfo_t *info, + shared_info_t *live_shinfo, + memory_map_entry_t **mapp, unsigned int *nr_entries) +{ + memory_map_entry_t *map = NULL; + + map = malloc(sizeof(*map)); + if ( map == NULL ) + { + PERROR("Could not allocate memory"); + goto out; + } + + map->addr = 0; + map->size = info->max_memkb * 1024; + + *mapp = map; + *nr_entries = 1; + return 0; + +out: + if ( map != NULL ) + free(map); + return -1; +} + +/* see setup_guest() @ xc_ia64_hvm_build.c */ +static int +memory_map_get_old_hvm(int xc_handle, xc_dominfo_t *info, + shared_info_t *live_shinfo, + memory_map_entry_t **mapp, unsigned int *nr_entries) +{ + const memory_map_entry_t gfw_map[] = { + {IO_PAGE_START, IO_PAGE_SIZE}, + {STORE_PAGE_START, STORE_PAGE_SIZE}, + {BUFFER_IO_PAGE_START, BUFFER_IO_PAGE_SIZE}, + {GFW_START, GFW_SIZE}, + }; + const unsigned int nr_gfw_map = sizeof(gfw_map)/sizeof(gfw_map[0]); + memory_map_entry_t *map = NULL; + unsigned int i; + +#define VGA_IO_END (VGA_IO_START + VGA_IO_SIZE) + /* [0, VGA_IO_START) [VGA_IO_END, 3GB), [4GB, ...) 
+ gfw_map */ + map = malloc((3 + nr_gfw_map) * sizeof(*map)); + if ( map == NULL ) + { + PERROR("Could not allocate memory"); + goto out; + } + + for ( i = 0; i < nr_gfw_map; i++ ) + map[i] = gfw_map[i]; + map[i].addr = 0; + map[i].size = info->max_memkb * 1024; + i++; + if ( map[i - 1].size < VGA_IO_END ) + { + map[i - 1].size = VGA_IO_START; + } + else + { + map[i].addr = VGA_IO_END; + map[i].size = map[i - 1].size - VGA_IO_END; + map[i - 1].size = VGA_IO_START; + i++; + if ( map[i - 1].addr + map[i - 1].size > MMIO_START ) + { + map[i].addr = MMIO_START + 1 * MEM_G; + map[i].size = map[i - 1].addr + map[i - 1].size - MMIO_START; + map[i - 1].size = MMIO_START - map[i - 1].addr; + i++; + } + } + *mapp = map; + *nr_entries = i; + return 0; + +out: + if ( map != NULL ) + free(map); + return -1; +} + +static int +memory_map_get_old(int xc_handle, xc_dominfo_t *info, + shared_info_t *live_shinfo, + memory_map_entry_t **mapp, unsigned int *nr_entries) +{ + if ( info->hvm ) + return memory_map_get_old_hvm(xc_handle, info, live_shinfo, + mapp, nr_entries); + if ( live_shinfo == NULL ) + return -1; + return memory_map_get_old_domu(xc_handle, info, live_shinfo, + mapp, nr_entries); +} + +static int +memory_map_get(int xc_handle, xc_dominfo_t *info, shared_info_t *live_shinfo, + memory_map_entry_t **mapp, unsigned int *nr_entries) +{ +#ifdef notyet + int ret = -1; + xen_ia64_memmap_info_t *memmap_info; + memory_map_entry_t *map; + char *start; + char *end; + char *p; + efi_memory_desc_t *md; + + if ( live_shinfo == NULL || live_shinfo->arch.memmap_info_pfn == 0 ) + goto old; + + memmap_info = xc_map_foreign_range(xc_handle, info->domid, + PAGE_SIZE, PROT_READ, + live_shinfo->arch.memmap_info_pfn); + if ( memmap_info == NULL ) + { + PERROR("Could not map memmap info."); + return -1; + } + if ( memmap_info->efi_memdesc_size != sizeof(*md) || + (memmap_info->efi_memmap_size / memmap_info->efi_memdesc_size) == 0 || + memmap_info->efi_memmap_size > PAGE_SIZE - sizeof(memmap_info) || + memmap_info->efi_memdesc_version != EFI_MEMORY_DESCRIPTOR_VERSION ) + { + PERROR("unknown memmap header. 
defaulting to compat mode."); + munmap(memmap_info, PAGE_SIZE); + goto old; + } + + *nr_entries = memmap_info->efi_memmap_size / memmap_info->efi_memdesc_size; + map = malloc(*nr_entries * sizeof(*md)); + if ( map == NULL ) + { + PERROR("Could not allocate memory for memmap."); + goto out; + } + *mapp = map; + + *nr_entries = 0; + start = (char*)&memmap_info->memdesc; + end = start + memmap_info->efi_memmap_size; + for ( p = start; p < end; p += memmap_info->efi_memdesc_size ) + { + md = (efi_memory_desc_t*)p; + if ( md->type != EFI_CONVENTIONAL_MEMORY || + md->attribute != EFI_MEMORY_WB || + md->num_pages == 0 ) + continue; + + map[*nr_entries].addr = md->phys_addr; + map[*nr_entries].size = md->num_pages << EFI_PAGE_SHIFT; + (*nr_entries)++; + } + ret = 0; +out: + munmap(memmap_info, PAGE_SIZE); + return ret; + +old: +#endif /* notyet */ + return memory_map_get_old(xc_handle, info, live_shinfo, mapp, nr_entries); +} + +static int +map_p2m(int xc_handle, xc_dominfo_t *info, shared_info_t *live_shinfo, + xen_pfn_t **live_p2m, unsigned long *pfnp) +{ + /* + * on ia64, both paravirtualize domain and hvm domain are + * auto_translated_physmap mode + */ + errno = ENOSYS; + return -1; +} +#else +# error "unsupported architecture" +#endif + +#ifndef ELF_CORE_EFLAGS +#define ELF_CORE_EFLAGS 0 +#endif + +/* string table */ +struct strtab { + char *strings; + uint16_t current; + uint16_t max; +}; + +static struct strtab* +strtab_init(void) +{ + struct strtab *strtab; + char *strings; + strtab = malloc(sizeof(strtab)); + if ( strtab == NULL ) + return NULL; + + strings = malloc(PAGE_SIZE); + if ( strings == NULL ) + { + PERROR("Could not allocate string table init"); + free(strtab); + return NULL; + } + strtab->strings = strings; + strtab->max = PAGE_SIZE; + + /* index 0 represents none */ + strtab->strings[0] = '\0'; + strtab->current = 1; + + return strtab; +} + +static void +strtab_free(struct strtab *strtab) +{ + free(strtab->strings); + free(strtab); +} + +static uint16_t +strtab_get(struct strtab *strtab, const char *name) +{ + uint16_t ret = 0; + uint16_t len = strlen(name) + 1; + + if ( strtab->current + len > strtab->max ) + { + char *tmp; + if ( strtab->max * 2 < strtab->max ) + { + PERROR("too long string table"); + errno = ENOMEM; + return ret; + } + + + tmp = realloc(strtab->strings, strtab->max * 2); + if ( tmp == NULL ) + { + PERROR("Could not allocate string table"); + return ret; + } + + strtab->strings = tmp; + strtab->max *= 2; + } + + ret = strtab->current; + strcpy(strtab->strings + strtab->current, name); + strtab->current += len; + return ret; +} + + +/* section headers */ +struct section_headers { + uint16_t num; + uint16_t num_max; + + Elf_Shdr *shdrs; +}; +#define SHDR_INIT 5 /* Currently the following 5 section is used + * null section + * .note.Xen, + * .Xen.p2m or .Xen.pfn, + * .Xen.pages + * .shstrtab, + */ +#define SHDR_INC 4 + +static struct section_headers* +shdr_init(void) +{ + struct section_headers *sheaders; + sheaders = malloc(sizeof(*sheaders)); + if ( sheaders == NULL ) + return NULL; + + sheaders->num = 0; + sheaders->num_max = SHDR_INIT; + sheaders->shdrs = malloc(sizeof(sheaders->shdrs[0]) * sheaders->num_max); + if ( sheaders->shdrs == NULL ) + { + free(sheaders); + return NULL; + } + return sheaders; +} + +static void +shdr_free(struct section_headers *sheaders) +{ + free(sheaders->shdrs); + free(sheaders); +} + +static Elf_Shdr* +shdr_get(struct section_headers *sheaders) +{ + Elf_Shdr *shdr; + + if ( sheaders->num == sheaders->num_max ) + { + Elf_Shdr 
*shdrs; + if ( sheaders->num_max + SHDR_INC < sheaders->num_max ) + { + errno = E2BIG; + return NULL; + } + sheaders->num_max += SHDR_INC; + shdrs = realloc(sheaders->shdrs, + sizeof(sheaders->shdrs[0]) * sheaders->num_max); + if ( shdrs == NULL ) + return NULL; + sheaders->shdrs = shdrs; + } + + shdr = &sheaders->shdrs[sheaders->num]; + sheaders->num++; + memset(shdr, 0, sizeof(*shdr)); + return shdr; +} + +static int +shdr_set(Elf_Shdr *shdr, + struct strtab *strtab, const char *name, uint32_t type, + uint64_t offset, uint64_t size, uint64_t addralign, uint64_t entsize) +{ + uint64_t name_idx = strtab_get(strtab, name); + if ( name_idx == 0 ) + return -1; + + shdr->sh_name = name_idx; + shdr->sh_type = type; + shdr->sh_offset = offset; + shdr->sh_size = size; + shdr->sh_addralign = addralign; + shdr->sh_entsize = entsize; + return 0; +} + +static int +elfnote_fill_xen_version(int xc_handle, + struct xen_elfnote_dumpcore_xen_version_desc + *xen_version) +{ + int rc; + memset(xen_version, 0, sizeof(*xen_version)); + + rc = xc_version(xc_handle, XENVER_version, NULL); + if ( rc < 0 ) + return rc; + xen_version->major_version = rc >> 16; + xen_version->minor_version = rc & ((1 << 16) - 1); + + rc = xc_version(xc_handle, XENVER_extraversion, + &xen_version->extra_version); + if ( rc < 0 ) + return rc; + + rc = xc_version(xc_handle, XENVER_compile_info, + &xen_version->compile_info); + if ( rc < 0 ) + return rc; + + rc = xc_version(xc_handle, XENVER_changeset, &xen_version->changeset); + if ( rc < 0 ) + return rc; + + rc = xc_version(xc_handle, XENVER_pagesize, NULL); + if ( rc < 0 ) + return rc; + xen_version->pagesize = rc; + + return 0; +} + +static int +elfnote_fill_format_version(struct xen_elfnote_dumpcore_format_version_desc + *format_version) +{ + format_version->major = XEN_DUMPCORE_FORMAT_MAJOR_VERSION; + format_version->minor = XEN_DUMPCORE_FORMAT_MINOR_VERSION; + format_version->extra = XEN_DUMPCORE_FORMAT_EXTRA_VERSION; + return 0; +} + int xc_domain_dumpcore_via_callback(int xc_handle, uint32_t domid, void *args, dumpcore_rtn_t dump_rtn) { - unsigned long nr_pages; - uint64_t *page_array = NULL; xc_dominfo_t info; - int i, nr_vcpus = 0; + shared_info_t *live_shinfo = NULL; + + int nr_vcpus = 0; char *dump_mem, *dump_mem_start = NULL; - struct xc_core_header header; vcpu_guest_context_t ctxt[MAX_VIRT_CPUS]; char dummy[PAGE_SIZE]; int dummy_len; - int sts; + int sts = -1; + + unsigned long i; + unsigned long j; + unsigned long nr_pages; + + memory_map_entry_t *memory_map = NULL; + unsigned int nr_memory_map; + unsigned int map_idx; + + int auto_translated_physmap; + xen_pfn_t *p2m = NULL; + unsigned long max_pfn = 0; + struct p2m *p2m_array = NULL; + + uint64_t *pfn_array = NULL; + + Elf_Ehdr ehdr; + unsigned long filesz; + unsigned long offset; + unsigned long fixup; + + struct strtab *strtab = NULL; + uint16_t strtab_idx; + struct section_headers *sheaders = NULL; + Elf_Shdr *shdr; + + /* elf notes */ + struct xen_elfnote elfnote; + struct xen_elfnote_dumpcore_none_desc none; + struct xen_elfnote_dumpcore_header_desc header; + struct xen_elfnote_dumpcore_xen_version_desc xen_version; + struct xen_elfnote_dumpcore_format_version_desc format_version; if ( (dump_mem_start = malloc(DUMP_INCREMENT*PAGE_SIZE)) == NULL ) { PERROR("Could not allocate dump_mem"); - goto error_out; + goto out; } if ( xc_domain_getinfo(xc_handle, domid, 1, &info) != 1 ) { PERROR("Could not get info for domain"); - goto error_out; - } + goto out; + } + /* Map the shared info frame */ + live_shinfo = 
xc_map_foreign_range(xc_handle, domid, PAGE_SIZE, + PROT_READ, info.shared_info_frame); + if ( !live_shinfo +#ifdef __ia64__ + && !info.hvm +#endif + ) + { + PERROR("Couldn't map live_shinfo"); + goto out; + } + auto_translated_physmap = is_auto_translated_physmap(&info); if ( domid != info.domid ) { PERROR("Domain %d does not exist", domid); - goto error_out; + goto out; } for ( i = 0; i <= info.max_vcpu_id; i++ ) - if ( xc_vcpu_getcontext(xc_handle, domid, i, &ctxt[nr_vcpus]) == 0) + if ( xc_vcpu_getcontext(xc_handle, domid, i, &ctxt[nr_vcpus]) == 0 ) nr_vcpus++; + if ( nr_vcpus == 0 ) + { + PERROR("No VCPU context could be grabbed"); + goto out; + } + + /* obtain memory map */ + sts = memory_map_get(xc_handle, &info, live_shinfo, + &memory_map, &nr_memory_map); + if ( sts != 0 ) + goto out; nr_pages = info.nr_pages; - + if ( !auto_translated_physmap ) + { + /* obtain p2m table */ + p2m_array = malloc(nr_pages * sizeof(struct p2m)); + if ( p2m_array == NULL ) + { + PERROR("Could not allocate p2m array"); + goto out; + } + + sts = map_p2m(xc_handle, &info, live_shinfo, &p2m, &max_pfn); + if ( sts != 0 ) + goto out; + } + else + { + pfn_array = malloc(nr_pages * sizeof(pfn_array[0])); + if ( pfn_array == NULL ) + { + PERROR("Could not allocate pfn array"); + goto out; + } + } + + /* create .Xen.p2m or .Xen.pfn */ + j = 0; + for ( map_idx = 0; map_idx < nr_memory_map; map_idx++ ) + { + uint64_t pfn_start; + uint64_t pfn_end; + + pfn_start = memory_map[map_idx].addr >> PAGE_SHIFT; + pfn_end = pfn_start + (memory_map[map_idx].size >> PAGE_SHIFT); + for ( i = pfn_start; i < pfn_end; i++ ) + { + if ( !auto_translated_physmap ) + { + if ( p2m[i] == INVALID_P2M_ENTRY ) + continue; + p2m_array[j].pfn = i; + p2m_array[j].gmfn = p2m[i]; + } + else + { + /* try to map page to determin wheter it has underlying page */ + void *vaddr = xc_map_foreign_range(xc_handle, domid, + PAGE_SIZE, PROT_READ, i); + if ( vaddr == NULL ) + continue; + munmap(vaddr, PAGE_SIZE); + pfn_array[j] = i; + } + + j++; + } + } + if ( j != nr_pages ) + { + PERROR("j (%ld) != nr_pages (%ld)", j , nr_pages); + /* When live dump-mode (-L option) is specified, + * guest domain may change its mapping. + */ + nr_pages = j; + } + + memset(&ehdr, 0, sizeof(ehdr)); + ehdr.e_ident[EI_MAG0] = ELFMAG0; + ehdr.e_ident[EI_MAG1] = ELFMAG1; + ehdr.e_ident[EI_MAG2] = ELFMAG2; + ehdr.e_ident[EI_MAG3] = ELFMAG3; + ehdr.e_ident[EI_CLASS] = ELFCLASS; + ehdr.e_ident[EI_DATA] = ELF_ARCH_DATA; + ehdr.e_ident[EI_VERSION] = EV_CURRENT; + ehdr.e_ident[EI_OSABI] = ELFOSABI_SYSV; + ehdr.e_ident[EI_ABIVERSION] = EV_CURRENT; + + ehdr.e_type = ET_CORE; + ehdr.e_machine = ELF_ARCH_MACHINE; + ehdr.e_version = EV_CURRENT; + ehdr.e_entry = 0; + ehdr.e_phoff = 0; + ehdr.e_shoff = sizeof(ehdr); + ehdr.e_flags = ELF_CORE_EFLAGS; + ehdr.e_ehsize = sizeof(ehdr); + ehdr.e_phentsize = sizeof(Elf_Phdr); + ehdr.e_phnum = 0; + ehdr.e_shentsize = sizeof(Elf_Shdr); + /* ehdr.e_shnum and ehdr.e_shstrndx aren't known here yet. 
fill it later*/ + + /* create section header */ + strtab = strtab_init(); + if ( strtab == NULL ) + { + PERROR("Could not allocate string table"); + goto out; + } + sheaders = shdr_init(); + if ( sheaders == NULL ) + { + PERROR("Could not allocate section headers"); + goto out; + } + /* null section */ + shdr = shdr_get(sheaders); + if ( shdr == NULL ) + { + PERROR("Could not get section header for null section"); + goto out; + } + + /* .shstrtab */ + shdr = shdr_get(sheaders); + if ( shdr == NULL ) + { + PERROR("Could not get section header for shstrtab"); + goto out; + } + strtab_idx = shdr - sheaders->shdrs; + /* strtab_shdr.sh_offset, strtab_shdr.sh_size aren't unknown. + * fill it later + */ + sts = shdr_set(shdr, strtab, ELF_SHSTRTAB, SHT_STRTAB, 0, 0, 0, 0); + if ( sts != 0 ) + goto out; + + /* elf note section */ + /* here the number of section header is unknown. fix up offset later. */ + offset = sizeof(ehdr); + filesz = + sizeof(struct xen_elfnote_dumpcore_none) + /* none */ + sizeof(struct xen_elfnote_dumpcore_header) + /* core header */ + sizeof(struct xen_elfnote_dumpcore_xen_version) + /* xen version */ + sizeof(struct xen_elfnote_dumpcore_format_version) + /* format version */ + sizeof(struct xen_elfnote_dumpcore_prstatus) + sizeof(ctxt[0]) * nr_vcpus; /* vcpu context */ + shdr = shdr_get(sheaders); + if ( shdr == NULL ) + { + PERROR("Could not get section header for note section"); + goto out; + } + sts = shdr_set(shdr, strtab, ELF_SEC_XEN_NOTE, SHT_NOTE, offset, filesz, + 0, 0); + if ( sts != 0 ) + goto out; + offset += filesz; + + /* shared_info */ + if ( live_shinfo != NULL ) + { + shdr = shdr_get(sheaders); + if ( shdr == NULL ) + { + PERROR("Could not get section header for .Xen.shared_info"); + goto out; + } + filesz = PAGE_SIZE; + sts = shdr_set(shdr, strtab, ELF_SEC_XEN_SHARED_INFO, SHT_PROGBITS, + offset, filesz, PAGE_SIZE, PAGE_SIZE); + if ( sts != 0 ) + goto out; + offset += filesz; + } + + /* p2m/pfn table */ + shdr = shdr_get(sheaders); + if ( shdr == NULL ) + { + PERROR("Could not get section header for .Xen.{p2m, pfn} table"); + goto out; + } + if ( !auto_translated_physmap ) + { + filesz = nr_pages * sizeof(p2m_array[0]); + sts = shdr_set(shdr, strtab, ELF_SEC_XEN_P2M, SHT_PROGBITS, + offset, filesz, + __alignof__(p2m_array[0]), sizeof(p2m_array[0])); + if ( sts != 0 ) + goto out; + } + else + { + filesz = nr_pages * sizeof(pfn_array[0]); + sts = shdr_set(shdr, strtab, ELF_SEC_XEN_PFN, SHT_PROGBITS, + offset, filesz, + __alignof__(pfn_array[0]), sizeof(pfn_array[0])); + if ( sts != 0 ) + goto out; + } + offset += filesz; + + /* pages */ + shdr = shdr_get(sheaders); + if ( shdr == NULL ) + { + PERROR("could not get section headers for .Xen.pages"); + goto out; + } + + /* + * pages are the last section to allocate section headers + * so that we know the number of section headers here. 
+ */ + fixup = sheaders->num * sizeof(*shdr); + /* zeroth section should have zero offset */ + for ( i = 1; i < sheaders->num; i++ ) + sheaders->shdrs[i].sh_offset += fixup; + offset += fixup; + dummy_len = ROUNDUP(offset, PAGE_SHIFT) - offset; /* padding length */ + offset += dummy_len; + + filesz = nr_pages * PAGE_SIZE; + sts = shdr_set(shdr, strtab, ELF_SEC_XEN_PAGES, SHT_PROGBITS, + offset, filesz, PAGE_SIZE, PAGE_SIZE); + if ( sts != 0 ) + goto out; + offset += filesz; + + /* fixing up section header string table section header */ + filesz = strtab->current; + sheaders->shdrs[strtab_idx].sh_offset = offset; + sheaders->shdrs[strtab_idx].sh_size = filesz; + + /* write out elf header */ + ehdr.e_shnum = sheaders->num; + ehdr.e_shstrndx = strtab_idx; + sts = dump_rtn(args, (char*)&ehdr, sizeof(ehdr)); + if ( sts != 0 ) + goto out; + + /* section headers */ + sts = dump_rtn(args, (char*)sheaders->shdrs, + sheaders->num * sizeof(sheaders->shdrs[0])); + if ( sts != 0 ) + goto out; + + /* elf note section */ + memset(&elfnote, 0, sizeof(elfnote)); + elfnote.namesz = strlen(XEN_ELFNOTE_NAME) + 1; + strncpy(elfnote.name, XEN_ELFNOTE_NAME, sizeof(elfnote.name)); + + /* elf note section:xen core header */ + elfnote.descsz = sizeof(none); + elfnote.type = XEN_ELFNOTE_DUMPCORE_NONE; + sts = dump_rtn(args, (char*)&elfnote, sizeof(elfnote)); + if ( sts != 0 ) + goto out; + sts = dump_rtn(args, (char*)&none, sizeof(none)); + if ( sts != 0 ) + goto out; + + /* elf note section:xen core header */ + elfnote.descsz = sizeof(header); + elfnote.type = XEN_ELFNOTE_DUMPCORE_HEADER; header.xch_magic = info.hvm ? XC_CORE_MAGIC_HVM : XC_CORE_MAGIC; header.xch_nr_vcpus = nr_vcpus; header.xch_nr_pages = nr_pages; - header.xch_ctxt_offset = sizeof(struct xc_core_header); - header.xch_index_offset = sizeof(struct xc_core_header) + - sizeof(vcpu_guest_context_t)*nr_vcpus; - dummy_len = (sizeof(struct xc_core_header) + - (sizeof(vcpu_guest_context_t) * nr_vcpus) + - (nr_pages * sizeof(*page_array))); - header.xch_pages_offset = round_pgup(dummy_len); - - sts = dump_rtn(args, (char *)&header, sizeof(struct xc_core_header)); - if ( sts != 0 ) - goto error_out; - + header.xch_page_size = PAGE_SIZE; + sts = dump_rtn(args, (char*)&elfnote, sizeof(elfnote)); + if ( sts != 0 ) + goto out; + sts = dump_rtn(args, (char*)&header, sizeof(header)); + if ( sts != 0 ) + goto out; + + /* elf note section: xen version */ + elfnote.descsz = sizeof(xen_version); + elfnote.type = XEN_ELFNOTE_DUMPCORE_XEN_VERSION; + elfnote_fill_xen_version(xc_handle, &xen_version); + sts = dump_rtn(args, (char*)&elfnote, sizeof(elfnote)); + if ( sts != 0 ) + goto out; + sts = dump_rtn(args, (char*)&xen_version, sizeof(xen_version)); + if ( sts != 0 ) + goto out; + + /* elf note section: format version */ + elfnote.descsz = sizeof(format_version); + elfnote.type = XEN_ELFNOTE_DUMPCORE_FORMAT_VERSION; + elfnote_fill_format_version(&format_version); + sts = dump_rtn(args, (char*)&elfnote, sizeof(elfnote)); + if ( sts != 0 ) + goto out; + sts = dump_rtn(args, (char*)&format_version, sizeof(format_version)); + if ( sts != 0 ) + goto out; + + /* note section:xen vcpu prstatus */ + elfnote.descsz = sizeof(ctxt[0]) * nr_vcpus; + elfnote.type = XEN_ELFNOTE_DUMPCORE_PRSTATUS; + sts = dump_rtn(args, (char*)&elfnote, sizeof(elfnote)); + if ( sts != 0 ) + goto out; sts = dump_rtn(args, (char *)&ctxt, sizeof(ctxt[0]) * nr_vcpus); if ( sts != 0 ) - goto error_out; - - if ( (page_array = malloc(nr_pages * sizeof(*page_array))) == NULL ) - { - IPRINTF("Could not 
allocate memory\n"); - goto error_out; - } - if ( xc_get_pfn_list(xc_handle, domid, page_array, nr_pages) != nr_pages ) - { - IPRINTF("Could not get the page frame list\n"); - goto error_out; - } - sts = dump_rtn(args, (char *)page_array, nr_pages * sizeof(*page_array)); - if ( sts != 0 ) - goto error_out; + goto out; + + if ( live_shinfo != NULL ) + { + /* shared_info: .Xen.shared_info */ + sts = dump_rtn(args, (char*)live_shinfo, PAGE_SIZE); + if ( sts != 0 ) + goto out; + } + + /* p2m/pfn table: .Xen.p2m/.Xen.pfn */ + if ( !auto_translated_physmap ) + sts = dump_rtn(args, (char *)p2m_array, + sizeof(p2m_array[0]) * nr_pages); + else + sts = dump_rtn(args, (char *)pfn_array, + sizeof(pfn_array[0]) * nr_pages); + if ( sts != 0 ) + goto out; /* Pad the output data to page alignment. */ memset(dummy, 0, PAGE_SIZE); - sts = dump_rtn(args, dummy, header.xch_pages_offset - dummy_len); - if ( sts != 0 ) - goto error_out; - + sts = dump_rtn(args, dummy, dummy_len); + if ( sts != 0 ) + goto out; + + /* dump pages: .Xen.pages */ for ( dump_mem = dump_mem_start, i = 0; i < nr_pages; i++ ) { - copy_from_domain_page(xc_handle, domid, page_array[i], dump_mem); + uint64_t gmfn; + if ( !auto_translated_physmap ) + gmfn = p2m_array[i].gmfn; + else + gmfn = pfn_array[i]; + + copy_from_domain_page(xc_handle, domid, gmfn, dump_mem); dump_mem += PAGE_SIZE; if ( ((i + 1) % DUMP_INCREMENT == 0) || ((i + 1) == nr_pages) ) { - sts = dump_rtn(args, dump_mem_start, dump_mem - dump_mem_start); + sts = dump_rtn(args, dump_mem_start, + dump_mem - dump_mem_start); if ( sts != 0 ) - goto error_out; + goto out; dump_mem = dump_mem_start; } } - free(dump_mem_start); - free(page_array); - return 0; - - error_out: - free(dump_mem_start); - free(page_array); - return -1; + /* elf section header string table: .shstrtab */ + sts = dump_rtn(args, strtab->strings, strtab->current); + if ( sts != 0 ) + goto out; + + sts = 0; + +out: + if ( p2m != NULL ) + munmap(p2m, PAGE_SIZE * P2M_FL_ENTRIES); + if ( p2m_array != NULL ) + free(p2m_array); + if ( pfn_array != NULL ) + free(pfn_array); + if ( sheaders != NULL ) + shdr_free(sheaders); + if ( strtab != NULL ) + strtab_free(strtab); + if ( dump_mem_start != NULL ) + free(dump_mem_start); + if ( live_shinfo != NULL ) + munmap(live_shinfo, PAGE_SIZE); + return sts; } /* Callback args for writing to a local dump file. */ diff -r 74ada22b59e8 -r 4466d95ea07b tools/libxc/xc_core.h --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/tools/libxc/xc_core.h Fri Feb 02 20:21:54 2007 +0900 @@ -0,0 +1,118 @@ +/* + * Copyright (c) 2006 Isaku Yamahata + * VA Linux Systems Japan K.K. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 
+ * + */ + +#ifndef XC_CORE_H +#define XC_CORE_H + +#include "xen/version.h" + +/* section names */ +#define ELF_SEC_XEN_NOTE ".note.Xen" +#define ELF_SEC_XEN_SHARED_INFO ".Xen.shared_info" +#define ELF_SEC_XEN_P2M ".Xen.p2m" +#define ELF_SEC_XEN_PFN ".Xen.pfn" +#define ELF_SEC_XEN_PAGES ".Xen.pages" + +/* elf note name */ +#define XEN_ELFNOTE_NAME "Xen" +/* note numbers are defined in xen/elfnote.h */ + +#define XEN_DUMPCORE_FORMAT_MAJOR_VERSION 0 +#define XEN_DUMPCORE_FORMAT_MINOR_VERSION 0 +#define XEN_DUMPCORE_FORMAT_EXTRA_VERSION 1 + +struct xen_elfnote { + uint32_t namesz; /* Elf_Note note; */ + uint32_t descsz; + uint32_t type; + char name[4]; /* sizeof("Xen") = 4 + * Fotunately this is 64bit aligned so that + * we can use same structore for both 32/64bit + */ +}; + +struct xen_elfnote_dumpcore_none_desc { + /* nothing */ +}; + +struct xen_elfnote_dumpcore_header_desc { + uint64_t xch_magic; + uint64_t xch_nr_vcpus; + uint64_t xch_nr_pages; + uint64_t xch_page_size; +}; + +struct xen_elfnote_dumpcore_xen_version_desc { + uint64_t major_version; + uint64_t minor_version; + xen_extraversion_t extra_version; + xen_compile_info_t compile_info; + xen_changeset_info_t changeset; + uint64_t pagesize; +}; + +struct xen_elfnote_dumpcore_format_version_desc { + uint64_t major; + uint64_t minor; + uint64_t extra; +}; + + +struct xen_elfnote_dumpcore_none { + struct xen_elfnote elfnote; + struct xen_elfnote_dumpcore_none_desc none; +}; + +struct xen_elfnote_dumpcore_header { + struct xen_elfnote elfnote; + struct xen_elfnote_dumpcore_header_desc header; +}; + +struct xen_elfnote_dumpcore_xen_version { + struct xen_elfnote elfnote; + struct xen_elfnote_dumpcore_xen_version_desc xen_version; +}; + +struct xen_elfnote_dumpcore_format_version { + struct xen_elfnote elfnote; + struct xen_elfnote_dumpcore_format_version_desc format_version; +}; + +struct xen_elfnote_dumpcore_prstatus { + struct xen_elfnote elfnote; + vcpu_guest_context_t ctxt[0]; +}; + +struct p2m { + uint64_t pfn; + uint64_t gmfn; +}; + +#endif /* XC_CORE_H */ + +/* + * Local variables: + * mode: C + * c-set-style: "BSD" + * c-basic-offset: 4 + * tab-width: 4 + * indent-tabs-mode: nil + * End: + */ diff -r 74ada22b59e8 -r 4466d95ea07b tools/libxc/xc_efi.h --- /dev/null Thu Jan 01 00:00:00 1970 +0000 +++ b/tools/libxc/xc_efi.h Fri Feb 02 20:21:54 2007 +0900 @@ -0,0 +1,68 @@ +#ifndef XC_EFI_H +#define XC_EFI_H + +/* definitions from xen/include/asm-ia64/linux-xen/linux/efi.h */ + +/* + * Extensible Firmware Interface + * Based on 'Extensible Firmware Interface Specification' version 0.9, April 30, 1999 + * + * Copyright (C) 1999 VA Linux Systems + * Copyright (C) 1999 Walt Drummond + * Copyright (C) 1999, 2002-2003 Hewlett-Packard Co. 
+ * David Mosberger-Tang + * Stephane Eranian + */ + +/* + * Memory map descriptor: + */ + +/* Memory types: */ +#define EFI_RESERVED_TYPE 0 +#define EFI_LOADER_CODE 1 +#define EFI_LOADER_DATA 2 +#define EFI_BOOT_SERVICES_CODE 3 +#define EFI_BOOT_SERVICES_DATA 4 +#define EFI_RUNTIME_SERVICES_CODE 5 +#define EFI_RUNTIME_SERVICES_DATA 6 +#define EFI_CONVENTIONAL_MEMORY 7 +#define EFI_UNUSABLE_MEMORY 8 +#define EFI_ACPI_RECLAIM_MEMORY 9 +#define EFI_ACPI_MEMORY_NVS 10 +#define EFI_MEMORY_MAPPED_IO 11 +#define EFI_MEMORY_MAPPED_IO_PORT_SPACE 12 +#define EFI_PAL_CODE 13 +#define EFI_MAX_MEMORY_TYPE 14 + +/* Attribute values: */ +#define EFI_MEMORY_UC ((uint64_t)0x0000000000000001ULL) /* uncached */ +#define EFI_MEMORY_WC ((uint64_t)0x0000000000000002ULL) /* write-coalescing */ +#define EFI_MEMORY_WT ((uint64_t)0x0000000000000004ULL) /* write-through */ +#define EFI_MEMORY_WB ((uint64_t)0x0000000000000008ULL) /* write-back */ +#define EFI_MEMORY_WP ((uint64_t)0x0000000000001000ULL) /* write-protect */ +#define EFI_MEMORY_RP ((uint64_t)0x0000000000002000ULL) /* read-protect */ +#define EFI_MEMORY_XP ((uint64_t)0x0000000000004000ULL) /* execute-protect */ +#define EFI_MEMORY_RUNTIME ((uint64_t)0x8000000000000000ULL) /* range requires runtime mapping */ +#define EFI_MEMORY_DESCRIPTOR_VERSION 1 + +#define EFI_PAGE_SHIFT 12 + +/* + * For current x86 implementations of EFI, there is + * additional padding in the mem descriptors. This is not + * the case in ia64. Need to have this fixed in the f/w. + */ +typedef struct { + uint32_t type; + uint32_t pad; + uint64_t phys_addr; + uint64_t virt_addr; + uint64_t num_pages; + uint64_t attribute; +#if defined (__i386__) + uint64_t pad1; +#endif +} efi_memory_desc_t; + +#endif /* XC_EFI_H */ diff -r 74ada22b59e8 -r 4466d95ea07b tools/libxc/xenctrl.h --- a/tools/libxc/xenctrl.h Mon Jan 29 14:07:26 2007 +0900 +++ b/tools/libxc/xenctrl.h Fri Feb 02 20:21:54 2007 +0900 @@ -556,6 +556,11 @@ unsigned long xc_translate_foreign_addre unsigned long xc_translate_foreign_address(int xc_handle, uint32_t dom, int vcpu, unsigned long long virt); + +/** + * DEPRECATED. Avoid using this, as it does not correctly account for PFNs + * without a backing MFN. 
+ */ int xc_get_pfn_list(int xc_handle, uint32_t domid, uint64_t *pfn_buf, unsigned long max_pfns); diff -r 74ada22b59e8 -r 4466d95ea07b tools/libxc/xg_private.h --- a/tools/libxc/xg_private.h Mon Jan 29 14:07:26 2007 +0900 +++ b/tools/libxc/xg_private.h Fri Feb 02 20:21:54 2007 +0900 @@ -139,6 +139,23 @@ typedef l4_pgentry_64_t l4_pgentry_t; #define PAGE_SIZE_IA64 (1UL << PAGE_SHIFT_IA64) #define PAGE_MASK_IA64 (~(PAGE_SIZE_IA64-1)) +#define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1)) + +/* Size in bytes of the P2M (rounded up to the nearest PAGE_SIZE bytes) */ +#define P2M_SIZE ROUNDUP((max_pfn * sizeof(xen_pfn_t)), PAGE_SHIFT) + +/* Number of xen_pfn_t in a page */ +#define fpp (PAGE_SIZE/sizeof(xen_pfn_t)) + +/* Number of entries in the pfn_to_mfn_frame_list_list */ +#define P2M_FLL_ENTRIES (((max_pfn)+(fpp*fpp)-1)/(fpp*fpp)) + +/* Number of entries in the pfn_to_mfn_frame_list */ +#define P2M_FL_ENTRIES (((max_pfn)+fpp-1)/fpp) + +/* Size in bytes of the pfn_to_mfn_frame_list */ +#define P2M_FL_SIZE ((P2M_FL_ENTRIES)*sizeof(unsigned long)) + struct domain_setup_info { uint64_t v_start; diff -r 74ada22b59e8 -r 4466d95ea07b tools/libxc/xg_save_restore.h --- a/tools/libxc/xg_save_restore.h Mon Jan 29 14:07:26 2007 +0900 +++ b/tools/libxc/xg_save_restore.h Fri Feb 02 20:21:54 2007 +0900 @@ -81,7 +81,6 @@ static inline int get_platform_info(int */ #define PFN_TO_KB(_pfn) ((_pfn) << (PAGE_SHIFT - 10)) -#define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1)) /* @@ -94,25 +93,5 @@ static inline int get_platform_info(int #define M2P_SIZE(_m) ROUNDUP(((_m) * sizeof(xen_pfn_t)), M2P_SHIFT) #define M2P_CHUNKS(_m) (M2P_SIZE((_m)) >> M2P_SHIFT) -/* Size in bytes of the P2M (rounded up to the nearest PAGE_SIZE bytes) */ -#define P2M_SIZE ROUNDUP((max_pfn * sizeof(xen_pfn_t)), PAGE_SHIFT) - -/* Number of xen_pfn_t in a page */ -#define fpp (PAGE_SIZE/sizeof(xen_pfn_t)) - -/* Number of entries in the pfn_to_mfn_frame_list */ -#define P2M_FL_ENTRIES (((max_pfn)+fpp-1)/fpp) - -/* Size in bytes of the pfn_to_mfn_frame_list */ -#define P2M_FL_SIZE ((P2M_FL_ENTRIES)*sizeof(unsigned long)) - -/* Number of entries in the pfn_to_mfn_frame_list_list */ -#define P2M_FLL_ENTRIES (((max_pfn)+(fpp*fpp)-1)/(fpp*fpp)) - /* Returns TRUE if the PFN is currently mapped */ #define is_mapped(pfn_type) (!((pfn_type) & 0x80000000UL)) - -#define INVALID_P2M_ENTRY (~0UL) - - - diff -r 74ada22b59e8 -r 4466d95ea07b xen/include/public/elfnote.h --- a/xen/include/public/elfnote.h Mon Jan 29 14:07:26 2007 +0900 +++ b/xen/include/public/elfnote.h Fri Feb 02 20:21:54 2007 +0900 @@ -169,6 +169,49 @@ */ #define XEN_ELFNOTE_CRASH_REGS 0x1000002 + +/* + * xen dump-core none note. + * xm dump-core code will create one XEN_ELFNOTE_DUMPCORE_NONE + * in its dump file to indicate that the file is xen dump-core + * file. This notes doesn't have any other infomation. + * See tools/libxc/xc_core.h for more infomration. + */ +#define XEN_ELFNOTE_DUMPCORE_NONE 0x2000000 + +/* + * xen dump-core header note. + * xm dump-core code will create one XEN_ELFNOTE_DUMPCORE_HEADER + * in its dump file. Its contains some magic numver and small infomations. + * See tools/libxc/xc_core.h for more infomration. + */ +#define XEN_ELFNOTE_DUMPCORE_HEADER 0x2000001 + +/* + * xen dump-core xen version note. + * xm dump-core code will create one XEN_ELFNOTE_DUMPCORE_XEN_VERSION + * in its dump file. Its contains xen vesion which is gotten by + * XENVER hypercall. + * See tools/libxc/xc_core.h for more infomration. 
+ */
+#define XEN_ELFNOTE_DUMPCORE_XEN_VERSION 0x2000002
+
+/*
+ * xen dump-core format version note.
+ * xm dump-core code will create one XEN_ELFNOTE_DUMPCORE_FORMAT_VERSION
+ * in its dump file. It contains the format version.
+ * See tools/libxc/xc_core.h for more information.
+ */
+#define XEN_ELFNOTE_DUMPCORE_FORMAT_VERSION 0x2000003
+
+/*
+ * xen dump-core prstatus note.
+ * xm dump-core code will create one XEN_ELFNOTE_DUMPCORE_PRSTATUS
+ * in its dump file. It contains register information as vcpu_guest_context_t.
+ * See tools/libxc/xc_core.h for more information.
+ */
+#define XEN_ELFNOTE_DUMPCORE_PRSTATUS 0x2000004
+
 #endif /* __XEN_PUBLIC_ELFNOTE_H__ */

 /*
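
(Illustrative addendum, not part of the patch.) The header comment added to xc_core.c says analysis tools should not depend on section order and should instead locate sections by name through the section header string table. Below is a minimal sketch of such a consumer. It assumes a 64-bit little-endian dump parsed with the host's <elf.h>; the file name xen-core-inspect.c is made up for this example, and a real tool would also check e_ident[EI_CLASS], e_ident[EI_DATA] and the XEN_ELFNOTE_DUMPCORE_FORMAT_VERSION note before trusting the layout.

/* xen-core-inspect.c -- illustrative sketch only, not part of this patch.
 * Locates the .Xen.p2m/.Xen.pfn and .Xen.pages sections of a dump-core
 * file by name rather than by position.  Assumes a 64-bit little-endian
 * dump read on a matching host.
 */
#define _FILE_OFFSET_BITS 64
#include <elf.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* read `size' bytes at file offset `off' into a freshly malloc'ed buffer */
static void *read_at(FILE *f, uint64_t off, uint64_t size)
{
    void *buf = malloc(size);
    if ( buf == NULL || fseeko(f, (off_t)off, SEEK_SET) != 0 ||
         fread(buf, 1, size, f) != size )
    {
        free(buf);
        return NULL;
    }
    return buf;
}

int main(int argc, char **argv)
{
    FILE *f;
    Elf64_Ehdr ehdr;
    Elf64_Shdr *shdrs;
    char *shstrtab;
    unsigned int i;

    if ( argc != 2 || (f = fopen(argv[1], "rb")) == NULL )
    {
        fprintf(stderr, "usage: %s <dump-core file>\n", argv[0]);
        return 1;
    }
    if ( fread(&ehdr, sizeof(ehdr), 1, f) != 1 ||
         memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0 ||
         ehdr.e_type != ET_CORE )
    {
        fprintf(stderr, "not an ELF core file\n");
        return 1;
    }

    /* section headers live at e_shoff; their names are offsets into
     * .shstrtab, whose own header is indexed by e_shstrndx */
    shdrs = read_at(f, ehdr.e_shoff, (uint64_t)ehdr.e_shnum * sizeof(*shdrs));
    if ( shdrs == NULL )
        return 1;
    shstrtab = read_at(f, shdrs[ehdr.e_shstrndx].sh_offset,
                       shdrs[ehdr.e_shstrndx].sh_size);
    if ( shstrtab == NULL )
        return 1;

    for ( i = 0; i < ehdr.e_shnum; i++ )
    {
        /* names match ELF_SEC_XEN_* in tools/libxc/xc_core.h */
        const char *name = shstrtab + shdrs[i].sh_name;

        if ( (strcmp(name, ".Xen.p2m") == 0 ||
              strcmp(name, ".Xen.pfn") == 0) && shdrs[i].sh_entsize != 0 )
            printf("%-16s %" PRIu64 " entries\n",
                   name, shdrs[i].sh_size / shdrs[i].sh_entsize);
        else if ( strcmp(name, ".Xen.pages") == 0 )
            printf("%-16s %" PRIu64 " bytes of page data at offset %" PRIu64 "\n",
                   name, shdrs[i].sh_size, (uint64_t)shdrs[i].sh_offset);
    }

    free(shstrtab);
    free(shdrs);
    fclose(f);
    return 0;
}

Since xc_domain_dumpcore_via_callback() writes the pages in the same order as the .Xen.p2m/.Xen.pfn entries, the data for table entry i starts at the .Xen.pages sh_offset plus i * xch_page_size.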