Re: [Xen-users] [win-pv-devel] Windows PV drivers with 4K sector size
Mike,
Yes, blkback is definitely misleading the frontend. From my reading of the blkback code, it appears to use the get_capacity() inline to get the number of sectors from the disk, so I suspect it is picking up the number of logical sectors but then setting that in xenstore alongside the actual physical sector size, which it reads straight from the block device. Unfortunately I think this is a long-standing bug, but it is worth reporting to the maintainers.
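To put rough numbers on it (using the values from your xenstore dump further down the thread, and assuming the frontend simply multiplies those two keys together):
echo $(( 34359738368 * 4096 ))   # 140737488355328 bytes = 128 TiB: reported count times reported sector size
echo $(( 34359738368 * 512 ))    # 17592186044416 bytes = 16 TiB: the same count times 512 matches the real LV size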
Cheers,
Paul
From: Mike Reardon [mailto:mule@xxxxxxxx]
Sent: 30 October 2017 17:23
To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
Cc: xen-users@xxxxxxxxxxxxxxxxxxxx; win-pv-devel@xxxxxxxxxxxxxxxxxxxx
Subject: Re: [win-pv-devel] Windows PV drivers with 4K sector size
I suspect it's probably blkback. I haven't passed anything to the guest to specify a backend, so it would be whichever the default is (the disk line from the config reads 'phy:/dev/tv_storage/main-storage,xvda,w').
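For comparison, if I did want to force the qdisk backend Paul mentions, I believe the xl disk line would look roughly like the following; treat it as a sketch rather than a tested config:
disk = [ 'format=raw, vdev=xvda, access=rw, backendtype=qdisk, target=/dev/tv_storage/main-storage' ]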
I think I've come across the problem by looking at the values from xenstore, though I'm not sure why this is happening. Here is the section from xenstore-ls for the relevant LV:
51712 = ""
frontend = "/local/domain/39/device/vbd/51712"
params = "/dev/tv_storage/main-storage"
script = "/etc/xen/scripts/block"
frontend-id = "39"
online = "1"
removable = "0"
bootable = "1"
state = "2"
dev = "xvda"
type = "phy"
mode = "w"
device-type = "disk"
discard-enable = "1"
feature-max-indirect-segments = "256"
multi-queue-max-queues = "12"
max-ring-page-order = "4"
physical-device = "fe:0"
physical-device-path = "/dev/dm-0"
hotplug-status = "connected"
feature-flush-cache = "1"
feature-discard = "0"
feature-barrier = "1"
feature-persistent = "1"
sectors = "34359738368"
info = "0"
sector-size = "4096"
physical-sector-size = "4096"
The number of sectors seemed a bit high, so I checked it against fdisk:
Disk /dev/tv_storage/main-storage: 16 TiB, 17592186044416 bytes, 4294967296 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
/dev/tv_storage/main-storage1 1 4294967295 4294967295 16T ee GPT
So it looks like Xen is getting the sector size correct, and it's actually the number of sectors reported from xenstore that is 8x higher than it should be.
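As a sanity check on that 8x figure, just doing the arithmetic on the numbers above:
echo $(( 34359738368 / 4294967296 ))   # 8: the xenstore count divided by fdisk's 4096-byte sector count
echo $(( 17592186044416 / 512 ))       # 34359738368: the xenstore value works out to the byte size divided by 512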
Mike
On Mon, Oct 30, 2017 at 4:47 AM, Paul Durrant <Paul.Durrant@xxxxxxxxxx> wrote:
Hi,
What backend are you using? Blkback or QEMU qdisk? I believe blkback may have errors in some of its calculated sizes if you use a block size other than 512 bytes. In the Windows PV frontend the driver gets both the sector size and the number of sectors from xenstore, so if the backend reports them correctly then you *should* see a disk of the correct size in the frontend. Could you check what values are being set in xenstore?
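In case it helps, one way to check from dom0 is something like the following (a sketch; assuming the backend node is under the usual dom0 path, and substituting your guest's domid and vbd id):
xenstore-ls /local/domain/0/backend/vbd/<domid>/<devid>
# or just the two values the frontend uses:
xenstore-read /local/domain/0/backend/vbd/<domid>/<devid>/sector-size
xenstore-read /local/domain/0/backend/vbd/<domid>/<devid>/sectors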
Cheers,
Paul
I added some new 4Kn drives to one of my servers, but I seem to be having some trouble getting a Windows VM to work with them. Originally I had just assigned a new logical volume to the existing guest, but Windows reported the disk as being 8x larger than it was, and any attempt to partition it would just throw back I/O errors. Hoping it was just some limitation of SeaBIOS, I created a new VM using OVMF; the disk was detected fine and the install went without issue. When I then attempted to install the PV drivers, however, the system would no longer boot and would throw Inaccessible Boot Device errors, so I'm guessing my problem in the original guest was the drivers rather than the BIOS.
So I guess what I'm getting at is that I'm trying to find out whether there is a way to make a 4K sector size work for Windows guests using the PV drivers. I'd hate to have to run the server without the PV drivers for obvious performance reasons.
Thanks for any insight anyone may have.
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
https://lists.xen.org/xen-users