Re: [Xen-devel] Re: [PATCH] libxl: basic support for virtio disk
On Wed, Jun 1, 2011 at 9:59 PM, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx> wrote:
> Wei Liu writes ("[Xen-devel] Re: [PATCH] libxl: basic support for virtio disk"):
>> Revised patch.
>>
>> Add code in libxl__device_disk_string_of_backend.
>>
>> Upper limit of virtio disk follows scsi.
>
> I'm not sure what you mean here.  Do you mean that Linux only supports
> as many virtio disks as it supports scsi disks?  Is this a
> fundamental limitation of the virtio protocol?

I asked about this limitation on qemu-devel; the answer was:

"virtio-blk as used by KVM is exposed as a virtio PCI adapter. There is
a 1:1 mapping between virtio-blk, PCI adapters, and block devices being
presented by QEMU:

1 virtio-blk device in guest == 1 virtio-pci adapter in guest == 1 block device in QEMU

The maximum number is really limited by the PCI bus, not virtio. In
terms of coding, you should try not to impose a hard limit at all."

Thus Stefano suggested I use the same limit as for SCSI disks.

>> +                else if (strncmp(disks[i].vdev, "vd", 2) == 0)
>> +                    drive = libxl__sprintf
>> +                        (gc, "file=%s,if=virtio,index=%d,media=disk,format=%s",
>> +                         disks[i].pdev_path, disk, format);
>
> Maybe I'm missing something but this seems not to use the partition
> number at all?

This was also answered on qemu-devel:

"Partitions are not at the virtio-blk level. The guest operating system
will see the virtio-blk disk and scan its partition table to determine
which partitions are available. The limit then depends on the
partitioning scheme that you use (legacy boot record, GPT, etc)."

So I'm not using the partition number here. In fact, we should not
support vda1, vda2, right?

The discussion thread is at http://marc.info/?l=qemu-devel&m=130689044627041&w=2

> The existing code seems rather broken TBH.

Hmm... I haven't caught up with libxl yet. I should be more careful next time...

> Ian.

Wei.
_______________________________________________ Xen-devel mailing list Xen-devel@xxxxxxxxxxxxxxxxxxx http://lists.xensource.com/xen-devel