
Re: [Xen-devel] [Question] PARSEC benchmark has smaller execution time in VM than in native?



> > Hey!
> >
> > CC-ing Elena.
> 
> I think you forgot to cc her.
> Anyway, let's cc her now... :-)
> 
> >
> >> We are comparing execution times between a native machine environment
> >> and a Xen virtualization environment using the PARSEC benchmark [1].
> >>
> >> In the virtualization environment, we run a domU with three VCPUs, each
> >> of them pinned to a core; we pin dom0 to another core that is not
> >> used by the domU.
> >>
> >> Inside the Linux guest in the domU and in the native environment, we
> >> used cpuset to isolate a core (or VCPU) for the system processes and
> >> another core for the benchmark processes. We also configured the Linux
> >> boot command line with the isolcpus= option to shield the benchmark
> >> core from other, unrelated processes.
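
For reference, a setup along those lines would typically look like this
(the domain name and core numbers are placeholders, and I am using
taskset instead of a full cpuset hierarchy just to keep the sketch
short):

    # domU config file: three VCPUs, each allowed only on its own core
    vcpus = 3
    cpus  = ["1", "2", "3"]

    # pin dom0's VCPU to core 0, away from the domU cores
    xl vcpu-pin Domain-0 0 0

    # guest (and native) kernel command line, keeping core 2 clean:
    #   isolcpus=2
    # then place the benchmark explicitly on the isolated core:
    taskset -c 2 ./benchmark

If your setup deviates from that, the difference would be interesting.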
> >
> > You may want to just offline them and also boot the machine with NUMA
> > disabled.
> 
> Right, the machine is booted up with NUMA disabled.
> We will offline the unnecessary cores then.
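
(FWIW, offlining can be done at runtime through sysfs, no reboot needed;
the core number below is just an example:

    # take core 3 out of the scheduler's view
    echo 0 > /sys/devices/system/cpu/cpu3/online

and writing 1 back brings the core online again.)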
> 
> >
> >>
> >> We expected the execution time of the benchmarks in the Xen
> >> virtualization environment to be larger than in the native
> >> environment. However, the evaluation gave us the opposite result.
> >>
> >> Below is the evaluation data for the canneal and streamcluster benchmarks:
> >>
> >> Benchmark: canneal, input=simlarge, conf=gcc-serial
> >> Native: 6.387s
> >> Virtualization: 5.890s
> >>
> >> Benchmark: streamcluster, input=simlarge, conf=gcc-serial
> >> Native: 5.276s
> >> Virtualization: 5.240s
> >>
> >> Is there anything wrong with our evaluation that could lead to these
> >> abnormal performance results?
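
For reference, with a stock PARSEC 3.0 tree I would expect runs like
those to be driven by something along these lines (the package, input,
and config names are taken from your report; the paths and the choice
of reported timer are my assumptions):

    cd parsec-3.0
    . ./env.sh
    parsecmgmt -a run -p canneal -i simlarge -c gcc-serial
    parsecmgmt -a run -p streamcluster -i simlarge -c gcc-serial

If you are timing something other than what parsecmgmt prints, that
detail would matter.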
> >
> > Nothing is wrong. Virtualization is naturally faster than baremetal!
> >
> > :-)
> >
> > No clue sadly.
> 
> Ah-ha. This is really surprising to me... Why would adding one more
> layer speed up the system? Unless virtualization disables some services
> that run in the native environment and interfere with the benchmark.
> 
> If virtualization is faster than baremetal by nature, why do some
> experiments show that virtualization introduces overhead?

Elena told me that there was a weird regression in Linux 4.1 where
CPU-burning workloads were _slower_ on baremetal than as guests.

Updating to a later kernel fixed that - one could then see that
baremetal was faster than (or on par with) the guest.
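
If you want to check whether you are hitting the same class of problem,
a pure CPU burner is enough - no PARSEC required. A throwaway example
(the iteration count is arbitrary; only the native-vs-guest ratio
matters):

    time awk 'BEGIN { s = 0; for (i = 0; i < 100000000; i++) s += i }'

If the guest wins on that too, I would look at the kernel version and
cpufreq configuration before suspecting the benchmark itself.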
> 
> For example, VMware published an evaluation at [1]. Fig. 3 on page 9
> shows that virtualization (both VMware ESX 3.0.1 and Xen) introduces
> overhead, and the benchmarks run slower under virtualization than
> natively.
> 
>  [1] https://www.vmware.com/pdf/hypervisor_performance.pdf
> 
> It seems to me that performance data may be tweaked (kind of cooked
> up) to some extent when people compare different hypervisors; one just
> needs to configure the system in a specific way to favor one
> hypervisor over the other.

> 
> Meng

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

 

