Re: [Xen-devel] Greater than 16 xvd devices for blkfront
On Tue, May 06, 2008 at 01:36:05PM -0400, Chris Lalancette wrote:
> All,
>      We've had a number of requests to increase the number of xvd devices
> that a PV guest can have.  Currently, if you try to connect > 16 disks, you
> get an error from xend.  The problem ends up being that both xend and
> blkfront assume that for dev_t, major/minor is 8 bits each, where in fact
> there are actually 10 bits for major and 22 bits for minor.
>      Therefore, it shouldn't really be a problem giving lots of disks to
> guests.  The problem is in backwards compatibility, and the details.  What
> I am initially proposing to do is to leave things where they are for
> /dev/xvd[a-p]; that is, still put the xenstore entries in the same place,
> and use 8 bits for the major and 8 bits for the minor.  For anything above
> that, we would end up putting the xenstore entry in a different place, and
> pushing the major into the top 10 bits (leaving the bottom 22 bits for the
> minor); that way old guests won't fire when the entry is added, and we will
> add code to newer guests' blkfront so that they will fire when they see
> that entry.  Does anyone see any problems with this setup, or have any
> ideas how to do it better?

Putting the xenstore entries in a different place is a non-starter. Too many
things look at that location already. When blktap was added and it put
xenstore entries in a different place it took months to track down all the
bugs this caused.

Dan.
-- 
|: Red Hat, Engineering, Boston   -o-   http://people.redhat.com/berrange/ :|
|: http://libvirt.org  -o-  http://virt-manager.org  -o-  http://ovirt.org :|
|: http://autobuild.org       -o-        http://search.cpan.org/~danberr/  :|
|: GnuPG: 7D3B9505  -o-  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
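[Editor's note: for readers following the bit layouts under discussion, here is a
minimal C sketch contrasting the legacy 8-bit-major/8-bit-minor split with the
top-10-bits-major / bottom-22-bits-minor encoding proposed in the quoted mail.
This is not code from xend or blkfront; the macro names are invented for
illustration. Major 202 is the Xen virtual block device major.]

    /*
     * Illustrative sketch only -- these macros are not from the Xen or
     * Linux trees; they just restate the two encodings discussed above.
     *
     * Legacy xvd[a-p] entries: 8 bits of major, 8 bits of minor.
     * Proposed extended entries: major in the top 10 bits, minor in the
     * bottom 22 bits of a 32-bit device number.
     */
    #include <stdio.h>

    #define LEGACY_MAJOR(dev)   (((dev) >> 8) & 0xff)
    #define LEGACY_MINOR(dev)   ((dev) & 0xff)

    #define EXT_MINOR_BITS      22
    #define EXT_MINOR_MASK      ((1u << EXT_MINOR_BITS) - 1)
    #define EXT_MAJOR(dev)      ((dev) >> EXT_MINOR_BITS)
    #define EXT_MINOR(dev)      ((dev) & EXT_MINOR_MASK)
    #define EXT_MKDEV(ma, mi)   (((unsigned)(ma) << EXT_MINOR_BITS) | \
                                 ((mi) & EXT_MINOR_MASK))

    int main(void)
    {
        /* xvdb1 in the legacy scheme: major 202, minor = 1*16 + 1 = 17. */
        unsigned legacy = (202u << 8) | 17;
        printf("legacy   major=%u minor=%u\n",
               LEGACY_MAJOR(legacy), LEGACY_MINOR(legacy));

        /* A disk beyond xvdp: its minor no longer fits in 8 bits, so it
         * would use the extended layout. */
        unsigned ext = EXT_MKDEV(202, 16 * 16 + 1);
        printf("extended major=%u minor=%u\n",
               EXT_MAJOR(ext), EXT_MINOR(ext));
        return 0;
    }

The point of the split is that legacy guests only ever parse the 8/8 layout,
while a new layout gives the minor enough room for more than 16 disks.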