
Re: [Xen-devel] blktap2 device creation failing after 162 devices w/Xen4.0 + linux-2.6.31.13



On Wed, 2010-04-14 at 01:40 -0400, John McCullough wrote:
> Daniel,
> 
> That did the trick and got us up to 256. Thanks!
> 
> Out of curiosity, what's standing in the way of more devices?

I must admit I never tried. Perhaps nothing more than a couple of
missing sparse tables here and there?

> We tried raising the MAX_*_DEVICES constants in these files to 512, but
> didn't have any luck:
> linux-2.6-pvops.git/drivers/xen/blktap/blktap.h
> tools/blktap2/include/blktaplib.h
> tools/blktap/lib/blktaplib.h
> 
> (The error is now "vbd open failed: -6")

That would be ENXIO (errno 6), probably returned while trying to open
the ring. Can you verify that with an strace -f?

Should indeed work for up to 2^20 devices, i.e. MAX_BLKTAP_DEVICES.

I don't think this is failing in the ring code: we return ENODEV there
(which is a bug in itself), and the kernel won't rewrite that error
code, so the ENXIO must be coming from somewhere else.

Daniel

> I noticed an artificial limit of 26*26 in the tapdev naming scheme, but
> I didn't look very thoroughly.
> 
> Thanks again,
> John
> 
> 
> Daniel Stodden wrote:
> > Hi.
> >
> > Please echo $((N * (32 * 11 + 50) + SOME_HEADROOM))
> > to /proc/sys/fs/aio-max-nr. Or set it up in sysctl.conf.
> >
> > Where N is the number of devices you desire.
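> >
> > For example, assuming N=256 and an arbitrary headroom of 1024
> > (32 * 11 + 50 = 402 per device):
> >
> > echo $((256 * (32 * 11 + 50) + 1024)) > /proc/sys/fs/aio-max-nr
> >
> > The persistent equivalent in /etc/sysctl.conf would be:
> >
> > fs.aio-max-nr = 103936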
> >
> > As for the apparently missing big fat complaint you should have seen pop
> > up in syslog, I'll keep it in mind for the next update. :}
> >
> > Cheers,
> > Daniel
> >
> > On Tue, 2010-04-13 at 20:20 -0400, John McCullough wrote:
> >   
> >> I have been working with a colleague to get a large number of small VMs 
> >> running on a single system.  We were hoping for at least 100, but we 
> >> seem to be topping out around 81.  Each VM has a disk image and a swap 
> >> image.  It seemed like we were hitting a blktap limit, so we tried 
> >> bumping up the MAX macros in tools/blktap2 and the linux driver, with no
> >> change.  (Though we haven't hit the theoretical 256 blktap devices yet).
> >>
> >> (Initially we were only able to get 64 VMs until we bumped 
> >> CONFIG_NR_CPUS from 8 to 64 to increase the number of dynirqs).
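> >>
> >> For reference, the corresponding dom0 kernel .config line (the option
> >> name is standard; 64 is just the value we ended up with):
> >>
> >> CONFIG_NR_CPUS=64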
> >>
> >> To isolate the problem, I tried creating a large number of blktap 
> >> devices in the dom0 with no guests running and I ran into the same 
> >> ceiling (162 total devices).  Commands to reproduce the problem follow:
> >>
> >> echo 9 > /sys/class/blktap2/verbosity
> >>
> >> for x in `seq 0 163`; do
> >>          if ( ! dd if=/dev/zero of=/scratch/test-$x.img bs=1 count=1 seek=1M 2> /dev/null); then
> >>                  echo "Qemu fail on $x"; exit 1
> >>          fi
> >>          if ( ! tapdisk2 -n aio:/scratch/test-$x.img) ; then
> >>                  echo "blktap fail on $x"; exit 1
> >>          fi
> >> done
> >>
> >> The result:
> >> ...
> >> /dev/xen/blktap-2/tapdev159
> >> /dev/xen/blktap-2/tapdev160
> >> /dev/xen/blktap-2/tapdev161
> >> /dev/xen/blktap-2/tapdev162
> >> unrecognized child response
> >> blktap fail on 163
> >>
> >> Dmesg output associated with 163:
> >> [ 1288.839978] blktap_sysfs_create: adding attributes for dev ffff88019e4d1e00
> >> [ 1288.840947] blktap_sysfs_destroy
> >>
> >> (Output for the prior devices includes processing a request, and a 
> >> blktap_device_finish_request)
> >>
> >> No related xm dmesg output.
> >>
> >> $ hg tip
> >> changeset:   21091:f28f1ee587c8
> >> tag:         tip
> >> user:        Keir Fraser <keir.fraser@xxxxxxxxxx>
> >> date:        Wed Apr 07 12:38:28 2010 +0100
> >> summary:     Added signature for changeset 484179b2be5d
> >>
> >> $ uname -a
> >> Linux sysnet121 2.6.32-3-amd64 #1 SMP Wed Feb 24 18:07:42 UTC 2010 x86_64 GNU/Linux
> >>
> >> Has anyone had contrary experience? Does anyone know where the 162 max 
> >> is coming from?
> >>
> >> Thanks,
> >> John
> >>
> >
> 
> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

