
Re: [Xen-users] Plans to require CPU VT flag?


  • To: "xen-users@xxxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxxx>
  • From: Christopher Myers <cmyers@xxxxxxxxxxxx>
  • Date: Wed, 20 Dec 2017 16:47:49 +0000
  • Accept-language: en-US
  • Delivery-date: Wed, 20 Dec 2017 16:48:13 +0000
  • List-id: Xen user discussion <xen-users.lists.xenproject.org>
  • Thread-topic: [Xen-users] Plans to require CPU VT flag?




On Wed, 2017-12-20 at 16:21 +0000, George Dunlap wrote:
> On Wed, Dec 20, 2017 at 2:15 PM, Christopher Myers
> <cmyers@millikin.edu> wrote:
> > 
> > On Wed, 2017-12-20 at 09:58 +0000, George Dunlap wrote:
> > > On Tue, Dec 19, 2017 at 9:58 PM, Christopher Myers
> > > <cmyers@millikin.edu> wrote:
> > > > 
> > > > Out of curiosity, are there any plans to *require* the VT CPU
> > > > flag in Xen at any point in the foreseeable future? The reason
> > > > I ask is that my server at home doesn't support it, so I want
> > > > to beware of any version that would break my current setup.
> > > > 
> > > > Right now I've got four PV VMs running on my Xen box (4.8.3 on
> > > > Debian Stretch), and am perfectly content with its performance :)
> > > 
> > > At the moment it's not possible to boot Xen without a PV domain 0
> > > (on x86). :-)
> > > 
> > > We are working on allowing PVH dom0 (which requires HVM support),
> > > but that won't be ready until 4.11 or 4.12; even if we were
> > > planning on phasing PV out, it wouldn't be possible for several
> > > years.
> > > 
> > > But there is no intention at this point of phasing out PV. As you
> > > say, there continues to be lots of x86 hardware (even new hardware)
> > > that doesn't have HVM support; for those platforms Xen will be
> > > basically the only option.
> > 
> > Awesome, thanks very much :)
> > 
> > It really is amazing how much you can do with Xen. My setup is an
> > Aaeon EMB CV1 A11 industrial motherboard (Atom D2550 processor) with
> > 4GB of memory. On that I'm able to run four Debian Stretch PV DomUs
> > without issue:
> >  - asterisk VOIP server
> >  - nginx reverse proxy
> >  - dedicated bind9 VM
> >  - "the everything" VM, running the usual LAMP stack, Minecraft
> >    server, rsyslog aggregator, NextCloud, secondary bind9 instance,
> >    email server, MantisBT, and about a half dozen other applications.
> > 
> > When you think about the fact that this is, in reality, running off
> > of two (not overly powerful) CPU cores, and performs very smoothly
> > on top of all that...
> 
> That's good feedback, thanks.  Intel Atom always comes up in
> discussions about why we need to keep PV, but I think until now it was
> always theoretical ("someone may want to do X").  Having at least one
> concrete user who has actually used X makes it a lot easier to justify
> supporting X. :-)
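
For anyone reading along who wants to check their own box: the VT flag
shows up in /proc/cpuinfo as "vmx" on Intel and "svm" on AMD, so a
quick test is just:

$ # no output here means no hardware virtualization (HVM) support
$ grep -Eo 'vmx|svm' /proc/cpuinfo | sort -u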


You're very welcome! I very much appreciate all the work that's gone
into this spectacular project over the years!

Personally, I think it's the perfect combination for a small
environment. I used to go the whole Raspberry Pi route, but it became
more cumbersome and didn't offer nearly enough flexibility. Xen on that
tiny little board, combined with an SSD and LVM, gives me an awesome
environment for my family's use. I think a setup like this would be
equally well-suited for small offices.
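
For anyone curious what that looks like in practice: each DomU lives on
a pair of logical volumes. A minimal sketch of how one might be set up,
where the volume group, LV names, and sizes are placeholders rather
than my exact layout:

$ # carve two logical volumes out of the SSD's volume group
$ sudo lvcreate -L 8G   -n nginx-root vg0
$ sudo lvcreate -L 512M -n nginx-swap vg0
$ cat /etc/xen/nginx.cfg
name       = "nginx"
bootloader = "pygrub"              # boot the guest's own Debian kernel
memory     = 384                   # appears as ~365M inside the guest
vcpus      = 1
vif        = [ 'bridge=xenbr0' ]
disk       = [ 'phy:/dev/vg0/nginx-root,xvda1,w',
               'phy:/dev/vg0/nginx-swap,xvda2,w' ]
$ sudo xl create /etc/xen/nginx.cfg

A nice side effect of the LVM route is that snapshots make guest
backups cheap.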

Performance is quite good too; I recently switched the NextCloud
install over from my old environment and shoved around 100GB of data
down its gullet over HTTPS (with the nginx VM handling the SSL
offloading), and it didn't miss a beat.
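
The offloading itself is nothing exotic: the nginx DomU terminates TLS
and proxies plain HTTP to the backend over the internal bridge. Roughly
like this, where the hostname, certificate paths, and backend IP are
all placeholders:

$ cat /etc/nginx/sites-enabled/nextcloud
server {
    listen 443 ssl;
    server_name cloud.example.org;

    ssl_certificate     /etc/ssl/certs/cloud.pem;
    ssl_certificate_key /etc/ssl/private/cloud.key;

    client_max_body_size 2G;              # allow large NextCloud uploads

    location / {
        proxy_pass http://192.168.1.4;    # the "everything" VM
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}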

I'm most amazed by the memory usage, though; this totally blows my
mind -- full Linux VMs, actually doing real work, using memory in the
tens of megabytes?! I've even got 384MB of memory that hasn't been
allocated to anything yet.
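
You can see the split from dom0 with xl: the Mem column of xl list is
each domain's allocation, and xl info reports what the hypervisor still
has unassigned.

$ sudo xl list
$ sudo xl info | grep -E 'total_memory|free_memory'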


dom0:
$ free -h
         total   used   free   shared  buff/cache   available
Mem:      424M   103M   179M     1.6M        140M        306M
Swap:     1.9G   1.3M   1.9G


The nginx VM:
$ free -h          
         total   used   free   shared  buff/cache   available
Mem:      365M    37M   158M     4.3M        168M        313M
Swap:     511M     0B   511M


The primary DNS VM:
$ free -h
         total   used   free   shared  buff/cache   available
Mem:      365M    59M   8.3M     4.1M        297M        292M
Swap:     511M    12K   511M



The asterisk VM:
$ free -h
         total   used   free   shared  buff/cache   available
Mem:      365M    77M   127M     1.7M        160M        276M
Swap:     511M     0B   511M



The everything VM:
$ free -h
         total   used   free   shared  buff/cache   available
Mem:      1.9G   817M    28M     264M        1.1G        882M
Swap:     511M    32M   479M
(Note that this includes a 256MB RAM disk for the Minecraft server.)
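
The RAM disk itself is just a tmpfs mount, one line in /etc/fstab along
these lines (the mount point is a placeholder):

# 256MB tmpfs for the Minecraft world
tmpfs  /srv/minecraft/world  tmpfs  size=256m,mode=0755  0  0

Worth remembering that tmpfs evaporates on reboot, so whatever lives
there needs to be copied back to real disk periodically; a simple rsync
cron job covers that.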




> 
>  -George
> 
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-users

 

