
[Xen-devel] [BUG] blkback reporting incorrect number of sectors, unable to boot


  • To: xen-devel@xxxxxxxxxxxxx
  • From: Mike Reardon <mule@xxxxxxxx>
  • Date: Fri, 3 Nov 2017 22:48:02 -0600
  • Delivery-date: Sat, 04 Nov 2017 05:31:06 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>

Hello,

I had originally posted about this issue to win-pv-devel, but it was suggested that this is actually an issue in blkback.

I added some additional storage to my server with some native 4k sector size disks.  The LVM volumes on that array seem to work fine when mounted by the host, and when passed through to any of the Linux guests, but Windows guests aren't able to use them when using the PV drivers.  They work fine while I'm installing Windows (Windows 10, latest build), but once I install the PV drivers the guest will no longer boot, giving an "inaccessible boot device" error.  If I assign the storage to a different Windows guest that already has the drivers installed (as secondary storage, not as the boot device), I see the disk listed in Disk Management, but the size of the disk is 8x larger than it should be.  After looking into it a bit, I found the disk is reporting 8x the number of sectors it should have when I run xenstore-ls.  Here is the info from xenstore-ls for the relevant volume:

      51712 = ""
       frontend = "/local/domain/8/device/vbd/51712"
       params = "/dev/tv_storage/main-storage"
       script = "/etc/xen/scripts/block"
       frontend-id = "8"
       online = "1"
       removable = "0"
       bootable = "1"
       state = "2"
       dev = "xvda"
       type = "phy"
       mode = "w"
       device-type = "disk"
       discard-enable = "1"
       feature-max-indirect-segments = "256"
       multi-queue-max-queues = "12"
       max-ring-page-order = "4"
       physical-device = "fe:0"
       physical-device-path = "/dev/dm-0"
       hotplug-status = "connected"
       feature-flush-cache = "1"
       feature-discard = "0"
       feature-barrier = "1"
       feature-persistent = "1"
       sectors = "34359738368"
       info = "0"
       sector-size = "4096"
       physical-sector-size = "4096"


Here are the numbers for the volume as reported by fdisk:

Disk /dev/tv_storage/main-storage: 16 TiB, 17592186044416 bytes, 4294967296 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device                        Boot Start        End    Sectors Size Id Type
/dev/tv_storage/main-storage1          1 4294967295 4294967295  16T ee GPT


As with the size reported in Windows disk management, the number of sectors reported via xenstore is 8x higher than it should be (34359738368 instead of the 4294967296 reported by fdisk).  The disks aren't using 512-byte sector emulation; they are natively 4k, so I have no idea where the 8x increase is coming from.
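A quick sanity check of the numbers above (just a sketch; all constants are copied from the xenstore-ls and fdisk output quoted earlier) shows that the xenstore "sectors" value only matches the real device size if it is interpreted in 512-byte units, despite sector-size being advertised as 4096:

```python
xenstore_sectors = 34359738368   # "sectors" from xenstore-ls
sector_size = 4096               # "sector-size" from xenstore-ls
fdisk_bytes = 17592186044416     # 16 TiB, from fdisk

# Interpreting "sectors" using the advertised 4096-byte sector size
# yields a device exactly 8x too large:
assert xenstore_sectors * sector_size == fdisk_bytes * 8

# Interpreting it in 512-byte units yields the correct size, which
# suggests blkback is reporting capacity in 512-byte units regardless
# of the advertised logical sector size:
assert xenstore_sectors * 512 == fdisk_bytes

print("sectors value is consistent with 512-byte units")
```

So it looks like the frontend multiplies "sectors" by "sector-size" while the backend publishes the capacity in 512-byte units, which would account for the 8x factor on a native 4k disk.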


Here is some additional info from the system:

Xen version is 4.10.0-rc3

xl info:
host                   : localhost
release                : 4.13.3-gentoo
version                : #1 SMP Sat Sep 23 00:48:14 MDT 2017
machine                : x86_64
nr_cpus                : 12
max_cpu_id             : 11
nr_nodes               : 1
cores_per_socket       : 6
threads_per_core       : 2
cpu_mhz                : 3200
hw_caps                : bfebfbff:17bee3ff:2c100800:00000001:00000001:00000000:00000000:00000100
virt_caps              : hvm hvm_directio
total_memory           : 65486
free_memory            : 27511
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 10
xen_extra              : .0-rc
xen_version            : 4.10.0-rc
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          :
xen_commandline        : placeholder dom0_mem=4G,max:4G
cc_compiler            : x86_64-pc-linux-gnu-gcc (Gentoo 5.4.0-r3 p1.3, pie-0.6.5) 5.4.0
cc_compile_by          :
cc_compile_domain      : localdomain
cc_compile_date        : Fri Nov  3 17:56:23 MDT 2017
build_id               : 518460cc025ca13ae79e3b971cfa0df2b1285323
xend_config_format     : 4



xl -v create:
libxl: detail: libxl_dom.c:264:hvm_set_viridian_features: base group enabled
libxl: detail: libxl_dom.c:264:hvm_set_viridian_features: freq group enabled
libxl: detail: libxl_dom.c:264:hvm_set_viridian_features: time_ref_count group enabled
libxl: detail: libxl_dom.c:264:hvm_set_viridian_features: apic_assist group enabled
libxl: detail: libxl_dom.c:264:hvm_set_viridian_features: crash_ctl group enabled
domainbuilder: detail: xc_dom_allocate: cmdline="", features=""
domainbuilder: detail: xc_dom_kernel_file: filename="/usr/libexec/xen/boot/hvmloader"
domainbuilder: detail: xc_dom_malloc_filemap    : 208 kB
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.10, caps xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ...
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying HVM-generic loader ...
domainbuilder: detail: loader probe OK
xc: detail: ELF: phdr: paddr=0x100000 memsz=0x3dc64
xc: detail: ELF: memory: 0x100000 -> 0x13dc64
domainbuilder: detail: xc_dom_mem_init: mem 4080 MB, pages 0xff000 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0xff000 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: xc_dom_malloc            : 8672 kB
xc: detail: PHYSICAL MEMORY ALLOCATION:
xc: detail:   4KB PAGES: 0x0000000000000200
xc: detail:   2MB PAGES: 0x00000000000003f7
xc: detail:   1GB PAGES: 0x0000000000000002
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x100+0x3e at 0x7f4882ac9000
domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x100000 -> 0x13e000  (pfn 0x100 + 0x3e pages)
xc: detail: ELF: phdr 0 at 0x7f4882a8b000 -> 0x7f4882abf1c8
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x13e+0x200 at 0x7f487eeda000
domainbuilder: detail: xc_dom_alloc_segment:   System Firmware module : 0x13e000 -> 0x33e000  (pfn 0x13e + 0x200 pages)
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x33e+0x1 at 0x7f4882b4c000
domainbuilder: detail: xc_dom_alloc_segment:   HVM start info : 0x33e000 -> 0x33f000  (pfn 0x33e + 0x1 pages)
domainbuilder: detail: alloc_pgtables_hvm: doing nothing
domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x33f000
domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x0
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32 <= matches
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64
domainbuilder: detail: clear_page: pfn 0xfefff, mfn 0xfefff
domainbuilder: detail: clear_page: pfn 0xfeffc, mfn 0xfeffc
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail:    allocated
domainbuilder: detail:       malloc             : 8688 kB
domainbuilder: detail:       anon mmap          : 0 bytes
domainbuilder: detail:    mapped
domainbuilder: detail:       file mmap          : 208 kB
domainbuilder: detail:       domU mmap          : 2300 kB
domainbuilder: detail: vcpu_hvm: called
domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0x10f000
domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0x10f001
domainbuilder: detail: xc_dom_release: called


Thank you!



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
