
Re: [Xen-users] Spectacularly disappointing disk throughput



On Fri, Feb 3, 2012 at 2:28 PM, Florian Heigl <florian.heigl@xxxxxxxxx> wrote:
> yes the PV disk and network drivers make the difference between heaven
> and hell for FreeBSD domUs.
> Do not bother with any more benchmarks until you have them working :)

> But yes, in general it's slower; I also kept using FreeNAS instead of
> Nexenta due to the smaller footprint and, my, I loved the clean UI,
> until they redid it for (as the log said "adding more round edges")

Hi Florian,

I believe I've gotten the PV drivers at least partly working. The
exact procedure I used for the kernel build is documented here:
http://files.fragmentationneeded.net/freebsd/build_kernel.txt
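For reference, the kernel-side change boils down to the usual XENHVM
additions to the FreeBSD kernel configuration (a sketch of what the
linked procedure produces; exact option names vary by FreeBSD release):

```
# FreeBSD 8.x-era HVM-with-PV-drivers kernel config additions (sketch)
options   XENHVM     # enable Xen HVM optimizations / PV drivers
device    xenpci     # Xen platform PCI device (unplugs emulated devices)
```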

I dropped the new kernel in place of the old one in the FreeNAS
/boot/kernel directory and used the same Xen guest configuration:
http://files.fragmentationneeded.net/freebsd/freenas-hvm.cfg
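For anyone following along without fetching the file, the guest config
is an ordinary HVM definition with the SATA controller handed through
via the pci= line, along these lines (device paths and the PCI BDF
below are illustrative, not copied from my actual config):

```
# hypothetical sketch of an HVM guest config with PCI passthrough
builder = "hvm"
memory  = 2048
disk    = [ 'phy:/dev/vg0/freenas,hda,w' ]
vif     = [ 'bridge=xenbr0' ]
pci     = [ '03:00.0' ]   # the SATA controller being passed through
```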

I see xen-related lingo as the system boots:
http://files.fragmentationneeded.net/freebsd/dmesg.txt

The network interface now appears as "xn0", and the disks have moved
around from ada* to ad* names.

Running the same 'dd' tests as before, I find that the boot device
(ad0, the xbd0 virtual block device noted by dmesg) shows much
improved performance: 60MB/s. But the drives attached to the
PCI-passthrough SATA controller remain exactly where they were:
0.5MB/s.
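For context, the numbers above come from simple sequential-read tests
along these lines (the device names here are illustrative, matching my
setup; adjust to taste):

```shell
# sequential read from the PV-backed boot disk (ad0 / xbd0) -- ~60MB/s
dd if=/dev/ad0 of=/dev/null bs=1m count=1000

# same test against a disk on the passed-through controller -- ~0.5MB/s
dd if=/dev/ad4 of=/dev/null bs=1m count=1000
```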

It seems that the PV block driver hasn't noticed the disks attached to
the passed-through controller.

What do you think? Should I even expect the PV drivers to help in this
PCI passthrough scenario?

Thank you!

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

