
Re: [Xen-devel] [BUG] blkback reporting incorrect number of sectors, unable to boot



On Mon, Nov 06, 2017 at 05:33:37AM -0700, Jan Beulich wrote:
> >>> On 04.11.17 at 05:48, <mule@xxxxxxxx> wrote:
> > I added some additional storage to my server with some native 4k sector
> > size disks.  The LVM volumes on that array seem to work fine when mounted
> > by the host, and when passed through to any of the Linux guests, but
> > Windows guests aren't able to use them when using PV drivers.  They work
> > fine when I first install Windows (Windows 10, latest build), but
> > once I install the PV drivers it will no longer boot, giving an
> > "inaccessible boot device" error.  If I assign the storage to a different
> > Windows guest that already has the drivers installed (as secondary storage,
> > not as the boot device) I see the disk listed in disk management, but the
> > size of the disk is 8x larger than it should be.  After looking into it a
> > bit, the disk is reporting 8x the number of sectors it should have when I
> > run xenstore-ls.  Here is the info from xenstore-ls for the relevant volume:
> > 
> >       51712 = ""
> >        frontend = "/local/domain/8/device/vbd/51712"
> >        params = "/dev/tv_storage/main-storage"
> >        script = "/etc/xen/scripts/block"
> >        frontend-id = "8"
> >        online = "1"
> >        removable = "0"
> >        bootable = "1"
> >        state = "2"
> >        dev = "xvda"
> >        type = "phy"
> >        mode = "w"
> >        device-type = "disk"
> >        discard-enable = "1"
> >        feature-max-indirect-segments = "256"
> >        multi-queue-max-queues = "12"
> >        max-ring-page-order = "4"
> >        physical-device = "fe:0"
> >        physical-device-path = "/dev/dm-0"
> >        hotplug-status = "connected"
> >        feature-flush-cache = "1"
> >        feature-discard = "0"
> >        feature-barrier = "1"
> >        feature-persistent = "1"
> >        sectors = "34359738368"
> >        info = "0"
> >        sector-size = "4096"
> >        physical-sector-size = "4096"
> > 
> > 
> > Here are the numbers for the volume as reported by fdisk:
> > 
> > Disk /dev/tv_storage/main-storage: 16 TiB, 17592186044416 bytes, 4294967296 sectors
> > Units: sectors of 1 * 4096 = 4096 bytes
> > Sector size (logical/physical): 4096 bytes / 4096 bytes
> > I/O size (minimum/optimal): 4096 bytes / 4096 bytes
> > Disklabel type: dos
> > Disk identifier: 0x00000000
> > 
> > Device                        Boot Start        End    Sectors Size Id Type
> > /dev/tv_storage/main-storage1          1 4294967295 4294967295  16T ee GPT
> > 
> > 
> > As with the size reported in Windows disk management, the number of sectors
> > reported via xenstore is 8x higher than it should be.  The disks aren't
> > using 512-byte sector emulation; they are natively 4k, so I have no idea
> > where the 8x increase is coming from.
> 
> Hmm, looks like a backend problem indeed: struct hd_struct's
> nr_sects (which get_capacity() returns) looks to be in 512-byte
> units, regardless of the actual sector size. Hence both the plain
> get_capacity() use and the (wrongly open coded) use of
> part_nr_sects_read() look insufficient in vbd_sz(). Roger,
> Konrad?
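
For what it's worth, the numbers above only add up if "sectors" is in
512-byte units:

    34359738368 * 512  = 17592186044416 bytes  = 16 TiB   (matches fdisk)
    34359738368 * 4096 = 140737488355328 bytes = 128 TiB  (8x too big)

A frontend that multiplies "sectors" by the advertised "sector-size" of
4096 therefore sees eight times (4096 / 512) the real capacity.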

Hm, AFAICT the xenstore "sector-size" node should always be set to 512.
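
The backend publishes the two nodes roughly like this (a paraphrased
sketch of drivers/block/xen-blkback; exact code varies by kernel
version):

    /* common.h: size of the vbd, in 512-byte units either way, since
     * hd_struct.nr_sects and get_capacity() both count 512-byte
     * sectors regardless of the device's logical block size. */
    #define vbd_sz(_v) ((_v)->bdev->bd_part ?                   \
                        (_v)->bdev->bd_part->nr_sects :         \
                        get_capacity((_v)->bdev->bd_disk))

    /* xenbus.c, connect(): "sectors" ends up in 512-byte units,
     * while "sector-size" is the device's logical block size
     * (4096 on these disks), so the two nodes disagree about the
     * unit. */
    xenbus_printf(xbt, dev->nodename, "sectors", "%llu",
                  (unsigned long long)vbd_sz(&be->blkif->vbd));
    xenbus_printf(xbt, dev->nodename, "sector-size", "%lu",
                  (unsigned long)bdev_logical_block_size(be->blkif->vbd.bdev));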

> Question of course is whether the Linux frontend then
> also needs adjustment, and hence whether the backend can
> be corrected in a compatible way in the first place.

blkfront uses set_capacity(), which also seems to expect a sector size
hardcoded to 512 bytes.
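
Roughly (paraphrased from drivers/block/xen-blkfront.c; details vary
by kernel version):

    err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
                        "sectors", "%llu", &sectors,
                        "info", "%u", &binfo,
                        "sector-size", "%lu", &sector_size,
                        NULL);

    /* set_capacity() takes the size in 512-byte units, so this is
     * only correct because "sectors" is written in 512-byte units;
     * sector_size is applied separately as the queue's logical
     * block size. */
    set_capacity(info->gd, sectors);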

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel
