
RE: [Xen-users] Debian/Xen usage summary


  • To: "Larry A Weidig" <lweidig@xxxxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx
  • From: "Petersson, Mats" <Mats.Petersson@xxxxxxx>
  • Date: Fri, 27 Apr 2007 16:41:42 +0200
  • Delivery-date: Fri, 27 Apr 2007 07:41:34 -0700
  • List-id: Xen user discussion <xen-users.lists.xensource.com>
  • Thread-index: AceI0864OiuDgwVfQ+64Sy3PcA1ARAAA0GTgAABPpLA=
  • Thread-topic: [Xen-users] Debian/Xen usage summary

 

> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx 
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Larry A Weidig
> Sent: 27 April 2007 15:27
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xen-users] Debian/Xen usage summary
> 
> Didier:
>       I am using Fedora Core 6 (2.6.18 kernel; 2.6.20 is buggy) with
> Xen 3.0.3 and have an x86_64 Dom0.  Currently we are only running
> fully virtualized guests, so I cannot comment on paravirtualization.
> We have run into both of the issues you are experiencing: only 32-bit
> guests (which in our case is fine) and only type=ioemu for the
> network card.  We have found network performance to be on the low end
> in these guests as well, and have yet to find a solution.
>       One thing I noticed, staying with the network issue, is that
> while Dom0's NIC is a Gigabit Ethernet device and recognized
> properly, the DomU machines are only being set up with 100 Mbit
> Ethernet devices.  Does anybody know a way to get a gigabit card into
> a DomU for an HVM guest?  What about any tricks for optimizing
> performance?

Since there is no "real wire" involved in the transfer across the
network within the machine (that is, before the traffic reaches Dom0's
physical NIC), the {10M, 100M, 1G}bps ratings used for wired networks
have no real meaning here: the guest is talking to a virtual network
device, not a REAL one. So, assuming your machine is capable, you
could get 1000Mbps on the network between DomU and Dom0 even if you
only have a 10Mbps network card! [Although currently, I'd say you'll
probably struggle to get much more than 10Mbps out of the emulated
network adapter.]
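One way to see what you actually get (a sketch, assuming iperf is
installed in both domains; the address below is a made-up example) is
to measure the throughput directly rather than trusting the advertised
link speed:

    # In Dom0: start an iperf server
    iperf -s

    # In the DomU: measure throughput to Dom0 for 30 seconds
    # (replace 192.168.1.1 with your real Dom0 address)
    iperf -c 192.168.1.1 -t 30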

The slowness in the network traffic is all down to the fact that there
is a SIGNIFICANT overhead in the way the virtual device modeling is
done. First of all, whenever a hardware access is done, the guest will
"exit" back to the hypervisor, which forwards the access to Dom0's
"qemu-dm", where the hardware access is then interpreted and
"performed". Network devices actually fare much better here than, for
example, IDE disk accesses, as most network devices need only a fairly
low number of accesses per packet. But there's still some
communication needed per packet. One of the major causes of delay is
Dom0 being busy, so it helps if you can make sure that Dom0 isn't
doing anything else (and runs on its own core(s), which aren't being
used by any other domain).
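A sketch of that core isolation with the xm tool (assuming the Xen 3.x
xm toolstack and a multi-core machine; the guest name below is just an
example):

    # Pin Dom0's virtual CPU to physical core 0
    xm vcpu-pin Domain-0 0 0

    # Pin the guest's virtual CPU to another core, so the guest
    # never competes with Dom0 for CPU time
    xm vcpu-pin myhvmguest 0 1

Setting cpus = "1" in the guest's config file should have the same
effect at start-up.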

But the biggest gain would be to completely skip the emulated network
adapter and use a para-virtual driver. This is a driver that is aware
that the network device isn't a "real" device, so rather than touching
emulated hardware it just packages up the network packet and forwards
it directly to Dom0. There's still a little overhead (particularly if
Dom0 is busy doing "other things") compared to a real machine with its
direct access to a network card, but the overhead is significantly
lower, as there's only ONE interaction with Dom0 per packet. PV
drivers are available for network and disk. Linux PV drivers are
included in the Xen source code (at least in the CURRENT code - not
sure when they got included, 3.0.3 or 3.0.4), and Windows ones are
available in some distributions that contain Xen, for example
XenExpress (although using those drivers with a different version of
Xen than the one used in XenExpress may cause problems - I haven't
tried it myself, but my colleagues reported problems doing that).
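In config terms the difference is just the vif line. A sketch (xenbr0
is the common default bridge name; note that the PV vif in an HVM
guest only shows up once the PV drivers are installed, which matches
Didier's observation below):

    # Emulated NIC - every access goes through qemu-dm:
    vif = [ 'type=ioemu, bridge=xenbr0' ]

    # PV netfront NIC - packets go straight to Dom0's netback:
    vif = [ 'bridge=xenbr0' ]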

--
Mats
>       Finally, I can help with working with LVM images, as we have
> needed to do that as well.  There is a good article at:
>       http://www.campworld.net/thewiki/pmwiki.php/Linux/DiskImagesHOWTO
>       Sorry I could not help with the other items; hopefully somebody
> else on the list will have more information for us.
> 
> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Didier Trosset
> Sent: Friday, April 27, 2007 8:56 AM
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-users] Debian/Xen usage summary
> 
> Hello,
> 
> I just set up a few virtual systems and ran into some limitations
> using Xen. I'd like to share these, just to find out whether the
> limitations are in the system or in the user :)
> 
> I am using a standard Debian 4.0 (etch) GNU/Linux distribution. The
> system is an Intel Core 2 Duo with hardware virtualization. I use the
> amd64 flavour, so my kernel is '2.6.18-4-xen-amd64'. Xen is version
> 3.0.3.
> 
> Using paravirtualization, I can start only other 64-bit guests:
>    Either the standard linux-image-2.6.18-4-xen-amd64,
>    Or a specially compiled one, with IDE compiled in (not as a module).
>    I cannot manage to start the Ubuntu 7.04 kernels for amd64.
> 
> Using full virtualization (hvm), I can start only 32-bit guests:
>    Starting from an ISO of a Debian amd64 or Fedora x86_64 install fails,
>    But starting from an i386 ISO of these allows the install to run OK.
> 
> Using hvm, I have to use ioemu for the network card to be recognized:
>    vif = [ "type=ioemu, ip=..." ] -- without it, the card is not detected.
>    BTW, I am using NAT for networking.
> 
> Then, starting to use all of these is a bit of a nightmare. Indeed,
> hvm systems do start but do not show the SDL display. xm list reports
> the system as running, but I have no access to it (and the network is
> not yet configured), although the display was present during the
> install. I don't know what happens here. I'd be glad for some hints.
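One hint worth trying for the missing SDL display (an assumption on my
part, not something I've verified against this exact setup): switch
the HVM config over to VNC and connect from Dom0. Option names as in
the stock xmexample.hvm:

    # In the HVM guest's config file: VNC instead of SDL
    sdl = 0
    vnc = 1

    # From Dom0, once the guest is running (the VNC display number
    # depends on your setup; try 0 first)
    vncviewer localhost:0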
> 
> One more question: how do I mount, inside dom0, a logical volume
> that is given as a whole disk to an hvm guest (which has partitioned
> it)?
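For the whole-disk LV question above, one common approach is kpartx
(from the multipath-tools package). A sketch, with made-up volume
group/LV names, assuming the guest is shut down first so the
filesystem isn't mounted in two places at once:

    # Map each partition inside the LV to its own device node
    # (names come out something like /dev/mapper/vg0-domudisk1;
    # the exact name can vary)
    kpartx -av /dev/vg0/domudisk

    # Mount the first partition read-only in Dom0
    mount -o ro /dev/mapper/vg0-domudisk1 /mnt

    # When done: unmount and remove the partition mappings again
    umount /mnt
    kpartx -dv /dev/vg0/domudisk

An alternative is losetup with an offset computed from the partition
table, but kpartx does that bookkeeping for you.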
> 
> Thanks in advance
> Didier
> 
> -- 
> Didier Trosset-Moreau
> Agilent Technologies
> Geneva, Switzerland
> 



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


 

