
Re: [Xen-users] Xen-users Digest, Vol 90, Issue 4



On Fri, Aug 3, 2012 at 1:37 PM, Salvatore Maltese <inf.maltese@xxxxxxxxx> wrote:
> Sorry, I'm a newbie at virtualization. I have to use Xen for a project on a
> distributed and cooperative backup application, and I would like to simulate
> it with virtual machines, hence with Xen. Unfortunately I have been trying to
> install it for two weeks. I would like some guidance on getting started,
> because I am starting to flip out. I would like to use Linux Mint 13 32-bit
> as Domain 0.
> I have looked at some packages, but I am not able to boot into it. Please
> help me. And sorry for my terrible English. Thanks.

Do you mean your host (dom0) cannot boot?  Are you using packages from
Linux Mint 13?

The best thing is to choose a distribution that has decent Xen support
out-of-the-box.  Debian squeeze and Ubuntu 12.04 are both good
options; I think recent Fedoras have good Xen support as well.
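
For example, on the distributions mentioned above the basic steps look
roughly like this (a sketch only; package and script names can vary by
release, so check your distribution's documentation):

    # Debian squeeze (as root); the metapackage name may be versioned,
    # e.g. xen-linux-system-2.6-xen-amd64
    apt-get install xen-linux-system-amd64
    # Make the Xen entry the default in GRUB, then regenerate grub.cfg
    dpkg-divert --divert /etc/grub.d/08_linux_xen --rename /etc/grub.d/20_linux_xen
    update-grub
    reboot

    # Ubuntu 12.04 (as root)
    apt-get install xen-hypervisor-amd64
    update-grub
    reboot

    # After rebooting into the Xen entry, verify dom0 is running:
    xm list    # or "xl list" with the newer toolstack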

 -George

>
>
> On Fri, Aug 3, 2012 at 2:00 PM, <xen-users-request@xxxxxxxxxxxxx> wrote:
>>
>> Send Xen-users mailing list submissions to
>>         xen-users@xxxxxxxxxxxxx
>>
>> To subscribe or unsubscribe via the World Wide Web, visit
>>         http://lists.xen.org/cgi-bin/mailman/listinfo/xen-users
>> or, via email, send a message with subject or body 'help' to
>>         xen-users-request@xxxxxxxxxxxxx
>>
>> You can reach the person managing the list at
>>         xen-users-owner@xxxxxxxxxxxxx
>>
>> When replying, please edit your Subject line so it is more specific
>> than "Re: Contents of Xen-users digest..."
>>
>>
>> Today's Topics:
>>
>>    1. Re: any opensource controllers to recommend? (Joseph Glanville)
>>    2. Re: XEN HA Cluster with LVM fencing and live migration ? The
>>       right way ? (Joseph Glanville)
>>    3. setting dom0 memory, where did I go wrong? (Michael Egglestone)
>>    4. Re: setting dom0 memory, where did I go wrong?
>>       (Emmanuel COURCELLE)
>>    5. Shoehorning a domU and "missing" memory (Xen 4.0.1) (Andrew Wade)
>>    6. Re: Transcendent Memory ("tmem") -capable kernel now publicly
>>       released (gavin gao)
>>    7. Re: Transcendent Memory ("tmem") -capable kernel now publicly
>>       released (Stephan Seitz)
>>
>>
>> ----------------------------------------------------------------------
>>
>> Message: 1
>> Date: Fri, 3 Aug 2012 10:07:49 +1000
>> From: Joseph Glanville <joseph.glanville@xxxxxxxxxxxxxx>
>> To: David Erickson <halcyon1981@xxxxxxxxx>
>> Cc: yue wang <heuye.wang@xxxxxxxxx>, xen-users@xxxxxxxxxxxxx
>> Subject: Re: [Xen-users] any opensource controllers to recommend?
>> Message-ID:
>>
>> <CAOzFzEjOtm2YFZO6frKjH8bLwSUYfiCL0F6Yc2=2iF-9VTWebw@xxxxxxxxxxxxxx>
>> Content-Type: text/plain; charset="utf-8"
>>
>> On 2 August 2012 10:39, David Erickson <halcyon1981@xxxxxxxxx> wrote:
>>
>> > I am biased as the author of Beacon (http://www.beaconcontroller.net/),
>> > but I have been using it with a cluster of 80 XenServer machines running
>> > OVS, interconnected by physical OpenFlow switches for over a year.
>> >
>>
>> I'll second Beacon as a good choice: great performance, and probably the
>> most feature-rich of the open-source controllers.
>>
>> If you want to start hacking on things quickly, there is NOX, which, though
>> not as robust as Beacon, lets you write extensions in Python; that is, in my
>> opinion, a big plus.
>>
>> There is also this project, which I have yet to try out, but it looks
>> interesting:
>>
>> https://github.com/trema
>>
>> It seems to be the remnants of the NEC Helios controller (which, to my
>> knowledge, was never made available anywhere).
>>
>>
>> >
>> > On Tue, Jul 31, 2012 at 7:57 PM, yue wang <heuye.wang@xxxxxxxxx> wrote:
>> >
>> >> Hi, All
>> >>
>> >> Do you have any open-source controllers to recommend?
>> >> Since XCP doesn't have a controller to manage OVS centrally, I need
>> >> something like the vSwitch Controller for XenServer.
>> >> There are so many open-source OpenFlow controllers that I really don't
>> >> know which one to choose.
>> >>
>> >> Thanks in advance.
>> >>
>> >> _______________________________________________
>> >> Xen-users mailing list
>> >> Xen-users@xxxxxxxxxxxxx
>> >> http://lists.xen.org/xen-users
>> >>
>> >
>> >
>> > _______________________________________________
>> > Xen-users mailing list
>> > Xen-users@xxxxxxxxxxxxx
>> > http://lists.xen.org/xen-users
>> >
>>
>>
>>
>> --
>> CTO | Orion Virtualisation Solutions | www.orionvm.com.au
>> Phone: 1300 56 99 52 | Mobile: 0428 754 846
>>
>> ------------------------------
>>
>> Message: 2
>> Date: Fri, 3 Aug 2012 10:14:23 +1000
>> From: Joseph Glanville <joseph.glanville@xxxxxxxxxxxxxx>
>> To: Herve Roux <vevroux@xxxxxxx>
>> Cc: xen-users@xxxxxxxxxxxxx
>> Subject: Re: [Xen-users] XEN HA Cluster with LVM fencing and live
>>         migration ? The right way ?
>> Message-ID:
>>
>> <CAOzFzEgvYFqyFKVCHDXKMvSiwx1Lfq0M6aQgAS4V3zJfZ=oPfg@xxxxxxxxxxxxxx>
>> Content-Type: text/plain; charset=windows-1252
>>
>> On 2 August 2012 18:19, Herve Roux <vevroux@xxxxxxx> wrote:
>> > Hi,
>> >
>> >
>> >
>> > I am trying to build a rock-solid Xen high-availability cluster. The
>> > platform is SLES 11 SP1 running on two HP DL585s, both connected through
>> > Fibre Channel HBAs to the SAN (an HP EVA).
>> >
>> > Xen is running smoothly and I'm even amazed by the live migration
>> > performance (this is the first time I have had the chance to try it in
>> > such a nice environment).
>> >
>> > Xen aside, the SLES Heartbeat cluster is running fine as well, and the
>> > two interact nicely.
>> >
>> > Where I'm having some doubts is the storage layout. I have tried several
>> > configurations, but each time I have had to compromise. And that is the
>> > problem: I don't like to compromise ;)
>> >
>> >
>> >
>> > First I tried to use one SAN LUN per guest (using the multipath dm device
>> > directly as a phy disk). This works nicely: live migration works fine and
>> > setup is easy, even if multipath.conf can get a bit fussy as the number of
>> > LUNs grows. But there is no fencing at all; I can start the same VM on
>> > both nodes, and this is BAD!
>> >
>> > Then I tried to use cLVM on top of multipath. I managed to get cLVM up
>> > and running pretty easily in the cluster environment.
>> >
>> > From here, two ways of thinking:
>> >
>> > 1.       One big SR on the SAN, split into LVs that I can use for my VMs.
>> > A huge step forward in flexibility: no need to reconfigure the SAN each
>> > time. Still, with this solution the SR VG is open in shared mode between
>> > the nodes and I have no low-level locking of the storage. I can start a VM
>> > twice, and this is bad, bad, bad.
>> >
>> > 2.       In order to provide fencing at the LVM level I can take another
>> > approach: one VG per volume, opened in exclusive mode. The volume will be
>> > active on one node at a time and there is no risk of data corruption. The
>> > cluster will be in charge of switching the volume over when migrating a VM
>> > from one node to the other. But here live migration does not work, and
>> > that is a real shame.
>> >
>> >
>> >
>> > I was wondering what approach others have taken and whether there is
>> > something I'm missing.
>> >
>> > I've looked into the Xen locking system, but in my view the risk of
>> > deadlock there is not ideal either. A DLM-based Xen locking system would
>> > be a good solution; I don't know whether any work has been done in that
>> > area.
>> >
>> >
>> >
>> > Thanks in advance
>> >
>> > Herve
>> >
>> >
>> >
>> >
>> >
>> >
>> > _______________________________________________
>> > Xen-users mailing list
>> > Xen-users@xxxxxxxxxxxxx
>> > http://lists.xen.org/xen-users
>>
>> Personally I would steer clear of cLVM.
>>
>> If your SAN provides decent programmatic control, you can probably
>> integrate its fencing capability into the Pacemaker/Linux-HA stack using a
>> custom OCF resource agent (usually not a huge amount of work).
>> This would give you everything you want, but it mainly comes down to how
>> "open" your SAN is.
>>
>> Alternatively, you can run with no storage-layer fencing and make sure you
>> have proper STONITH in place.
>> This is pretty easy with the Pacemaker/Corosync stack and can be done in a
>> lot of ways. If you are reasonably sure of your stack's stability (no
>> deadlocks, kernel oopses, etc.) then you can just use SSH STONITH.
>> However, if you want to be really sure, then you can use an IP PDU, USB
>> power-off, etc.
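>>
>> For example, with the crm shell an SSH-based STONITH resource (good enough
>> for testing only) can be declared in a couple of lines; the node names below
>> are placeholders:
>>
>>     crm configure primitive st-ssh stonith:external/ssh \
>>         params hostlist="node1 node2"
>>     crm configure clone fencing-clone st-ssh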
>>
>> It's all about how sure you want to be and how much money and time you have.
>>
>> Joseph.
>> --
>> CTO | Orion Virtualisation Solutions | www.orionvm.com.au
>> Phone: 1300 56 99 52 | Mobile: 0428 754 846
>>
>>
>>
>> ------------------------------
>>
>> Message: 3
>> Date: Thu, 02 Aug 2012 21:48:22 -0700
>> From: "Michael Egglestone" <mike@xxxxxxxxx>
>> To: xen-users@xxxxxxxxxxxxx
>> Subject: [Xen-users] setting dom0 memory, where did I go wrong?
>> Message-ID: <fc.00000001e6690c0f00000001e6690c0f.e6690c10@xxxxxxxxx>
>> Content-Type: text/plain; charset="utf-8"
>>
>> Hello,
>> I'm trying to set dom0 to 4G of memory.
>> (Let me quickly say that I don't know whether 4G of RAM for dom0 is a good
>> idea, but I thought I would try it; please advise otherwise.)  :)
>> My system has 32 GB of memory (Xeons with Debian 6.0.5).
>>
>> Here is my /etc/default/grub
>>
>> GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=4096M dom0_vcpus_pin"
>>
>> Here is my /etc/xen/xend-config.sxp
>>
>> [snip]
>> # dom0-min-mem is the lowest permissible memory level (in MB) for dom0.
>> # This is a minimum both for auto-ballooning (as enabled by
>> # enable-dom0-ballooning below) and for xm mem-set when applied to dom0.
>> (dom0-min-mem 196)
>>
>> # Whether to enable auto-ballooning of dom0 to allow domUs to be created.
>> # If enable-dom0-ballooning = no, dom0 will never balloon out.
>> (enable-dom0-ballooning no)
>>
>> # 32-bit paravirtual domains can only consume physical
>> # memory below 168GB. On systems with memory beyond that address,
>> # they'll be confined to memory below 128GB.
>> # Using total_available_memory (in GB) to specify the amount of memory
>> reserved
>> # in the memory pool exclusively for 32-bit paravirtual domains.
>> # Additionally you should use dom0_mem = <-Value> as a parameter in
>> # xen kernel to reserve the memory for 32-bit paravirtual domains, default
>> # is "0" (0GB).
>> (total_available_memory 0)
>>
>> # In SMP system, dom0 will use dom0-cpus # of CPUS
>> # If dom0-cpus = 0, dom0 will take all cpus available
>> (dom0-cpus 4)
>> [snip]
>>
>> I've updated grub to populate /boot/grub/grub.cfg and then rebooted.
>> It boots, and then I run top on my dom0 which shows this:
>>
>> Tasks: 153 total,   1 running, 152 sleeping,   0 stopped,   0 zombie
>> Cpu(s):  0.1%us,  0.3%sy,  0.0%ni, 96.9%id,  1.8%wa,  0.0%hi,  0.0%si,  0.9%st
>> Mem:   2267508k total,  1641848k used,   625660k free,    88140k buffers
>> Swap:        0k total,        0k used,        0k free,  1279128k cached
>>
>> Why do I only have about 2 GB of RAM?
>>
>> Here are my domU's...
>>
>> root@xen:/etc/xen# xm list
>> Name                       ID   Mem VCPUs      State   Time(s)
>> Domain-0                    0  2557     4     r-----     252.0
>> debian-central              1  1024     2     -b----      22.2
>> debian-cms                  2  4096     2     -b----     186.7
>> debian-ldap                 3  1024     2     -b----       4.6
>> debian-ts                   4  1024     2     -b----      42.8
>> redhat-sdsweb               5  4096     4     -b----      58.7
>> w2k3-awards                 6  2048     2     -b----     225.4
>> w2k3-sme                    8  2048     2     -b----      33.5
>> w2k8-sme                    7  2048     2     -b----      52.6
>> root@xen:/etc/xen#
>>
>> root@xen:/etc/xen# xm info
>> host                   : xen.sd57.bc.ca
>> release                : 2.6.32-5-xen-amd64
>> version                : #1 SMP Sun May 6 08:57:29 UTC 2012
>> machine                : x86_64
>> nr_cpus                : 24
>> nr_nodes               : 2
>> cores_per_socket       : 6
>> threads_per_core       : 2
>> cpu_mhz                : 2800
>> hw_caps                :
>> bfebfbff:2c100800:00000000:00001f40:029ee3ff:00000000:00000001:00000000
>> virt_caps              : hvm hvm_directio
>> total_memory           : 32704
>> free_memory            : 12115
>> node_to_cpu            : node0:0,2,4,6,8,10,12,14,16,18,20,22
>>                          node1:1,3,5,7,9,11,13,15,17,19,21,23
>> node_to_memory         : node0:2172
>>                          node1:9942
>> node_to_dma32_mem      : node0:2172
>>                          node1:0
>> max_node_id            : 1
>> xen_major              : 4
>> xen_minor              : 0
>> xen_extra              : .1
>> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
>> hvm-3.0-x86_32p hvm-3.0-x86_64
>> xen_scheduler          : credit
>> xen_pagesize           : 4096
>> platform_params        : virt_start=0xffff800000000000
>> xen_changeset          : unavailable
>> xen_commandline        : placeholder dom0_mem=4096M dom0_vcpus_pin
>> cc_compiler            : gcc version 4.4.5 (Debian 4.4.5-8)
>> cc_compile_by          : fw
>> cc_compile_domain      : deneb.enyo.de
>> cc_compile_date        : Thu Jun 21 06:41:09 UTC 2012
>> xend_config_format     : 4
>> root@xen:/etc/xen#
>>
>> Thanks for your advice!
>>
>> Cheers,
>> Mike
>>
>>
>> ------------------------------
>>
>> Message: 4
>> Date: Fri, 03 Aug 2012 08:49:39 +0200
>> From: Emmanuel COURCELLE <emmanuel.courcelle@xxxxxxxxxxxxxxxx>
>> To: xen-users@xxxxxxxxxxxxx
>> Subject: Re: [Xen-users] setting dom0 memory, where did I go wrong?
>> Message-ID: <501B7483.6030908@xxxxxxxxxxxxxxxx>
>> Content-Type: text/plain; charset="iso-8859-1"; Format="flowed"
>>
>> On 03/08/2012 06:48, Michael Egglestone wrote:
>> > Hello,
>> > I'm trying to set dom0 to 4G of memory.
>> > (Let me quickly say that I don't know whether 4G of RAM for dom0 is a
>> > good idea, but I thought I would try it; please advise otherwise.)  :)
>> > My system has 32 GB of memory (Xeons with Debian 6.0.5).
>> >
>> Hello
>>
>> I think you're running Debian? Which version?
>>
>> We recently installed a server with 256 GB of memory (Debian testing, Xen
>> 4.1, kernel 3.3.4 downloaded from kernel.org) in order to run a guest with
>> as much as 200 GB of memory, and all of this works with as little as 512 MB
>> for dom0!
>>
>> However, top shows "KiB Mem: 354520" and free also shows 354520,
>>
>> BUT
>>
>> xm top shows 523912K
>>
>> As far as I understand, xm top (or the equivalent if you're using the xl
>> stack) is a better tool than top and others for monitoring dom0.
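>>
>> For example (a quick sanity check; the output obviously depends on the host):
>>
>>     xm list Domain-0          # memory as the hypervisor has allocated it
>>     xm info | grep _memory    # total_memory / free_memory at the Xen level
>>     free -m                   # what the dom0 kernel itself believes it has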
>>
>> Sincerely,
>>
>> --
>> Emmanuel COURCELLE                emmanuel.courcelle@xxxxxxxxxxxxxxxx
>> L.I.P.M. (UMR CNRS-INRA 2594/441) tel (33) 5-61-28-54-50
>> I.N.R.A. - 24 chemin de Borde Rouge - Auzeville
>> CS52627 - 31326 CASTANET TOLOSAN Cedex - FRANCE
>>
>>
>> ------------------------------
>>
>> Message: 5
>> Date: Fri, 03 Aug 2012 11:33:23 +0100
>> From: Andrew Wade <andrew@xxxxxxxxxx>
>> To: xen-users@xxxxxxxxxxxxx
>> Subject: [Xen-users] Shoehorning a domU and "missing" memory (Xen
>>         4.0.1)
>> Message-ID: <501BA8F3.6060701@xxxxxxxxxx>
>> Content-Type: text/plain; charset=ISO-8859-1
>>
>> Hi,
>>
>> I'm seeing an issue with 4.0.1 on Debian 6 regarding "missing" memory.
>>
>> My set up:
>>
>>  * Xen 4.0.1 on Debian 6.0.5
>>  * dom0-min-mem 512, enable-dom0-ballooning no, dom0_mem=512M (GRUB
>> config). (I also tried with dom0-min-mem 0)
>>  * Host server has 32GB RAM
>>
>> # xm list
>> Name                       ID   Mem VCPUs      State   Time(s)
>> Domain-0                    0   501     8     r-----      12.0
>>
>> # xm info|grep memory
>> total_memory           : 32758
>> free_memory            : 31832
>>
>> 32758 (total) - 31832 ('free_memory') - 512 (dom0) = 414MB unaccounted.
>> (No domus are running)
>>
>> I created an HVM domU with 31488 MB of RAM (n.b. this is less than the
>> 31832 reported by "xm info" as free_memory, and I allowed for roughly 250 MB
>> of memory overhead) and 4 VCPUs, but it wouldn't start due to insufficient
>> available memory. I expected it to fit.
>>
>> Is there an official calculation for the memory overhead (for
>> tables/caches etc)?
>>
>> Can anyone explain why a domU with 31,488 MB won't start when 31,832 MB is
>> free? I'm trying to work out the maximum amount of RAM a domU can have
>> (i.e. to occupy an entire server).
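>>
>> One crude way to measure the overhead empirically (the config path and domU
>> size below are only examples, not taken from my setup):
>>
>>     xm info | grep free_memory        # note the value
>>     xm create /etc/xen/testdomu.cfg   # a domU configured with, say, memory = 1024
>>     xm info | grep free_memory        # the drop minus 1024 MB ~ overhead at that size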
>>
>> Thanks.
>>
>> --
>> Andrew Wade
>>
>> Memset Ltd., registration number 4504980. 25 Frederick Sanger Road,
>> Guildford, Surrey, GU2 7YD, UK.
>>
>>
>>
>> ------------------------------
>>
>> Message: 6
>> Date: Fri, 3 Aug 2012 04:23:15 -0700 (PDT)
>> From: gavin gao <gavin20112012@xxxxxxxxxxxx>
>> To: xen-users@xxxxxxxxxxxxxxxxxxx
>> Subject: Re: [Xen-users] Transcendent Memory ("tmem") -capable kernel
>>         now publicly released
>> Message-ID: <1343992995864-5710502.post@xxxxxxxxxxxxx>
>> Content-Type: text/plain; charset=us-ascii
>>
>>
>>
>> Hi everyone,
>> This feature is very cool. When I run a Linux kernel build on my VM
>> (vcpus=4, memory=256M, to simulate memory pressure), it takes almost 30
>> hours, and tmem reduces this time to 2 hours.
>>
>> I am going to keep testing it.
>>
>>
>>
>> Gavin
>>
>>
>>
>>
>>
>>
>> ------------------------------
>>
>> Message: 7
>> Date: Fri, 3 Aug 2012 11:47:22 +0000
>> From: Stephan Seitz <s.seitz@xxxxxxxxxxx>
>> To: gavin gao <gavin20112012@xxxxxxxxxxxx>
>> Cc: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
>> Subject: Re: [Xen-users] Transcendent Memory ("tmem") -capable kernel
>>         now publicly released
>> Message-ID: <1343994442.9254.2.camel@wotan2>
>> Content-Type: text/plain; charset="utf-8"
>>
>> Hi,
>>
>> How are you implementing it?
>>
>> I'm currently using it on test machines with
>>     xen bootargs tmem tmem_dedup tmem_compress
>> and
>>     dom0 kernel bootargs tmem
>>
>> The domUs also run recent kernels with the tmem boot argument as well as
>> the zcache module loaded.
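>>
>> Concretely, the relevant pieces look roughly like this on a GRUB 2 system
>> (an illustrative sketch, not my exact configuration):
>>
>>     # /etc/default/grub on the host, followed by update-grub:
>>     GRUB_CMDLINE_XEN_DEFAULT="tmem tmem_dedup tmem_compress"
>>     GRUB_CMDLINE_LINUX="tmem"
>>
>>     # inside each domU (kernel also booted with "tmem"):
>>     modprobe zcache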
>>
>> I noticed very high memory latency when overcommitting memory.
>>
>> cheers,
>>
>> Stephan
>>
>>
>> ------------------------------
>>
>> _______________________________________________
>> Xen-users mailing list
>> Xen-users@xxxxxxxxxxxxx
>> http://lists.xen.org/xen-users
>>
>>
>> End of Xen-users Digest, Vol 90, Issue 4
>> ****************************************
>
>
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxx
> http://lists.xen.org/xen-users

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxx
http://lists.xen.org/xen-users


 

