The intention behind the announcement was to inform people interested in virtualization about Xvisor. It was an early report on the achievements of Xvisor ARM (for now, compared to KVM ARM). We are certainly planning a scientific paper on Xvisor.
Consider the members of this list informed.
Also, I do agree that KVM ARM can be further optimized, but as I mentioned in my previous replies, "KVM ARM will end up putting more and more stuff in-kernel". For now you can think of Xvisor ARM as KVM ARM doing everything in-kernel. Xvisor ARM is being optimized too, so it will keep improving as time passes. It is common wisdom that no hypervisor in the world can beat native performance. Xvisor ARM is already very close to native performance, whereas KVM ARM will get close to native performance only by increasing its monolithic nature (i.e. doing more things in-kernel). If monolithic hypervisors perform so well, then why not have a monolithic hypervisor built for virtualization only? That was the motivation behind writing Xvisor.
Apart from high performance, Xvisor has many interesting features, such as: Ability to work without hardware virtualization support - Xvisor ARM is able to boot multiple unmodified Linux guests even on hosts which do not implement the virtualization extensions. In contrast, KVM ARM does not work without the virtualization extensions. The range of host hardware that Xvisor ARM can potentially support is therefore much larger than what KVM ARM can support; Xvisor ARM can in fact run on old ARMv5 processors too.
Tree-based configuration - To create a guest, we just describe the guest in the form of a device tree (possible even at runtime). In contrast, for KVM one needs to add the support in QEMU and recompile the binaries.
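To give a flavour, here is a simplified sketch of such a guest description. (The node and property names below are only indicative, made up for this illustration; they are not the exact configuration syntax.)

    /* Illustrative guest description -- names are indicative only */
    guest0 {
        vcpu_count = <2>;
        aspace {
            mem0 {
                manifest_type = "real";             /* backed by host RAM  */
                guest_physical_addr = <0x80000000>;
                physical_size = <0x06000000>;       /* 96 MiB of guest RAM */
            };
        };
    };

Because this is plain data rather than compiled-in board code, such a description can in principle be generated or edited at runtime.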
Pass-through hardware access - Hardware that is not accessed or virtualized by Xvisor can be used in pass-through mode. Providing a guest with pass-through access to a device is just a matter of adding a tree node and configuring the irq routing information in the guest tree. It is not just PCI devices; we can provide any kind of device as pass-through accessible (note: if a device has built-in DMA, it must sit behind an IOMMU or SysMMU, otherwise pass-through would be a security breach). We have already tried out a serial port and a NIC as pass-through devices.
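Extending the illustrative sketch above, a pass-through device is just one more node that maps the real device one-to-one into the guest and routes its interrupt (again, property names are indicative only):

    uart1 {
        manifest_type = "real";              /* real hardware, not emulated */
        guest_physical_addr = <0x10009000>;  /* where the guest sees it     */
        host_physical_addr = <0x10009000>;   /* where it actually lives     */
        physical_size = <0x1000>;
        interrupts = <38>;                   /* host irq routed to guest    */
    };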
We can compare KVM's advantages with Xvisor as follows: Scheduler - The Linux kernel scheduler is a very mature and proven OS scheduler, but a hypervisor scheduler can be quite different; scheduling processes and scheduling VMs are very different problems. For VMs we can use information such as the amount of emulated I/O done, the time spent waiting for an irq, and so on, to improve the quality of server consolidation.
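As a rough sketch of the kind of heuristic I mean (illustrative C, made up for this discussion, not actual Xvisor code): the scheduler could track how much of a vcpu's time goes into emulated I/O and irq waits, and favour I/O-bound vcpus, which tend to run briefly and block again:

    #include <stdint.h>

    /* Per-vcpu accounting a hypervisor scheduler could keep. */
    struct vcpu_stats {
        uint64_t ns_running;      /* executing guest code     */
        uint64_t ns_emulated_io;  /* inside device emulation  */
        uint64_t ns_irq_wait;     /* idle, waiting for an irq */
    };

    /* Higher score = schedule sooner.  Returns the per-mille share of
       time this vcpu spent blocked rather than computing. */
    static uint64_t sched_score(const struct vcpu_stats *s)
    {
        uint64_t blocked = s->ns_emulated_io + s->ns_irq_wait;
        uint64_t total = s->ns_running + blocked;
        return total ? (blocked * 1000) / total : 0;
    }

An OS process scheduler has no natural place for this kind of input; a hypervisor-only scheduler can be built around it.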
Driver base - Xvisor has, and will continue to have, driver framework APIs similar to Linux, so porting a driver is in most cases a one-to-one replacement of APIs. User space tools - To start with, Xvisor will use the libvirt tools (or a similar open source initiative) for remote management.
Co-existence with host processes - Xvisor is not an OS. It is made for virtualization only, so there are no processes. Of course, Xvisor has an internal threading framework, but most of the time these background threads are sleeping, doing nothing. All the management commands are provided by the management terminal daemon (which is a background thread in Xvisor).
This all sounds fantastic!
But as I said, this list is for the development of KVM/ARM and not a place for arguing how fantastic Xvisor is as opposed to everything else. Please keep that in mind.
I am looking forward to your paper.
On Wed, May 9, 2012 at 4:29 AM, Christoffer Dall <cdall@xxxxxxxxxxxxxxx> wrote:
Anup,
Thanks for providing info on your Xvisor project. However, this is a mailing list for the development of KVM/ARM and not a scientific forum to establish in theory which hypervisor "will always perform better than" which other hypervisor.
If you want to establish that your code base will always perform better than all other hosted hypervisors, I strongly encourage you to submit a paper about this to a peer-reviewed conference. Personally, I would find establishing such facts in a scientific way extremely interesting.
I understand that you wish to argue Xvisor's superiority in comparison to KVM, but I disagree with your conclusions. The code path taken in KVM can be optimized to be extremely short and all logic could be placed within the KVM module. There are numerous other advantages to using KVM (existing driver base, upstream kernel integration, compatibility with existing user space tools, co-existence with native host processes, etc.) and with server grade hardware I see the reliance on the Linux kernel for scheduling, memory management etc. to be a great advantage - and not a drawback. On the other hand, when Xvisor matures, feature requests will only increase its code size and complexity as well.
-Christoffer

Hi PMM,

Whether a model counts as valid for measuring performance is a matter of opinion. There are a number of Tier-1 conferences which accept simulation numbers as evidence that one approach is better, provided the simulation platform is well accepted by everyone.
Talking about code sequences, both Xvisor ARM and KVM ARM have the same set of emulators and drivers. In fact, almost all of the emulation code has been adopted from QEMU, and many of the crucial drivers are adopted from Linux ARM. Unlike KVM ARM, Xvisor ARM has no unnecessary switching between host mode and guest mode, and the amount of code traversed in handling any fault is also much smaller; hence Xvisor ARM will execute much less code than KVM ARM.
In Xvisor development, we have observed that the result of any CPU performance test on QEMU or Fast Models naturally scales up on real hardware. At least, we have never come across any scenario or test that performs better on QEMU or Fast Models than in the real world (this holds for tests running on native Linux as well as on Linux running as a guest on Xvisor ARM).
We strongly believe that monolithic approaches always perform better than micro-kernelized approaches (or approaches somewhere in between micro-kernel and monolithic). Hence Xvisor ARM will always perform better than KVM ARM in theory, in simulation, and in the real world.
Best Regards,
Anup Patel

On Sun, May 6, 2012 at 2:21 PM, Peter Maydell <peter.maydell@xxxxxxxxxx> wrote:
On 6 May 2012 05:22, Anup Patel <anup@xxxxxxxxxxxxxx> wrote:
> Also can you give an example of a code sequence which is faster on the model
> and slower in the real world? As far as I know, ARM Fast Models are internally
> TLM-based models, and if a TLM-based model is emulating a timer chip of X clock
> then it is quite precisely X clock.
Support for TLM does not require that the underlying model is cycle
accurate (you can have 'loosely timed' behaviour).
You might want to read the Fast Models documentation, which tries
to be clear about what the models do and don't provide. In particular:
http://infocenter.arm.com/help/topic/com.arm.doc.dui0423l/ch02s01s02.html
"Fast models cannot be used to:
* model cycle counting
* model software performance
"
> Of course, CPU emulation and computation power will be less compared to the
> real world. To see this behaviour, try to boot Linux on a Fast Model or QEMU,
> leave it for hours, and come back to see the time elapsed; you will definitely
> see the same amount of time elapsed as in the real world.
Nobody's arguing that the models are faster than hardware!
Let's try a simple example with some numbers representing
relative speeds:
operation A: h/w: 1 ; model: 5
operation B: h/w: 3 ; model: 30
where we're comparing two equivalent code sequences "A A A A" vs "B".
On hardware "B" will be faster: "A A A A" costs 4 x 1 = 4 against 3 for "B".
On the model "A A A A" beats "B": it costs 4 x 5 = 20 against 30.
(Both sequences are slower on the model than on the hardware, obviously.)
The point is that some operations will be vastly vastly slower
on the model, and some operations merely moderately slower. Which
of any two code sequences is faster depends at least as much on
whether it's using operations that are disproportionately worse
on the model. A trivial example of this is VFP -- certainly QEMU
has to do complex software emulation of the floating point ops to
maintain bit-for-bit accuracy, which makes them very slow to the
point where a hand-optimised-integer-assembly codec is likely to
be faster on the model than a Neon/VFP-using codec, even though
of course the Neon codec will be faster on hardware.
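As a concrete (if simplified) illustration -- a sketch, not a real
benchmark -- consider the same dot product written with floats and with
Q16.16 fixed point. On hardware the float version uses VFP/Neon and wins;
under QEMU every float op goes through bit-exact soft-float emulation, so
the integer version can come out ahead on the model:

    #include <stdint.h>

    /* Float version: a VFP multiply and add per element on hardware,
       but a slow bit-exact soft-float call per op under QEMU. */
    float dot_float(const float *a, const float *b, int n)
    {
        float acc = 0.0f;
        for (int i = 0; i < n; i++)
            acc += a[i] * b[i];
        return acc;
    }

    /* Q16.16 fixed point (values scaled by 1 << 16): plain integer
       multiplies and adds, which the model translates cheaply. */
    int32_t dot_fixed(const int32_t *a, const int32_t *b, int n)
    {
        int64_t acc = 0;
        for (int i = 0; i < n; i++)
            acc += (int64_t)a[i] * b[i];   /* products are Q32.32 */
        return (int32_t)(acc >> 16);       /* rescale to Q16.16   */
    }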
[NB: this is itself a big simplification: model performance will
depend on a lot of interacting things and is not purely a
same-every-time slowdown per operation. Some operations effectively
slow down what happens after them, for instance on QEMU if you do
something that makes us flush our cache of translated code. And
if for instance you have a periodic timer then the fact the model
is generally slower means you execute proportionally more insns in
the timer interrupt, so inefficiency or slowness in that code path
has disproportionately more effect on overall speed than it does
on hardware. There are other complications too...]
> The results in the announcement are not baseless; we have quite a number of
> reasons to believe Xvisor ARM will perform better than KVM ARM in the real
> world too.
I'm not stating a position on whether KVM will be better or worse
than Xvisor. I'm just pointing out that you can't base an argument
on the faulty assumption that performance inside a model can tell
you anything useful about performance on hardware.
-- PMM