
[Xen-API] Fwd: XCP 1.0 and openvswitch problem high memory


  • To: xen-api@xxxxxxxxxxxxxxxxxxx
  • From: Nycko <nyckopro@xxxxxxxxx>
  • Date: Mon, 4 Jun 2012 15:48:09 -0300
  • Delivery-date: Mon, 04 Jun 2012 18:48:21 +0000
  • List-id: User and development list for XCP and XAPI <xen-api.lists.xen.org>

---------- Forwarded message ----------
From: Nycko <nyckopro@xxxxxxxxx>
Date: Mon, Jun 4, 2012 at 8:57 AM
Subject: Re: [Xen-API] XCP 1.0 and openvswitch problem high memory
To: George Shuklin <george.shuklin@xxxxxxxxx>


On Mon, May 28, 2012 at 6:25 PM, Nycko <nyckopro@xxxxxxxxx> wrote:
> On Mon, May 7, 2012 at 5:57 PM, George Shuklin <george.shuklin@xxxxxxxxx> 
> wrote:
>> I've had that problem periodically (we are still using XCP 0.5 for an
>> older pool in production) when dom0 is short on resources. Solution: add
>> more vCPUs to dom0 and raise dom0's memory.
>>
>> After easing the resource pressure I never hit that condition again over
>> a prolonged period (>4 months) on a pretty heavily loaded pool.
>>
>>
>> On 07.05.2012 16:35, Nycko wrote:
>>>
>>> Hello, I have a problem with XCP 1.0. In recent weeks I have watched
>>> openvswitch consume more and more CPU and, above all, memory. At some
>>> point (after a while, less than a week) it reaches the limit (512 MB)
>>> and starts swapping, to the point of leaving the physical host, and
>>> therefore its virtual machines, inoperative. Everything returns to
>>> normal when I restart the daemon (/etc/init.d/openvswitch), but I
>>> cannot keep doing that all the time. Can someone give me some tips on
>>> where to keep looking?
>>>
>>> Leave some logs that may be useful:
>>> #tail -f /var/log/openvswitch/ovs-vswitchd.log
>>> May 07 09:28:36|47029|timeval|WARN|6 ms poll interval (0 ms user, 0 ms
>>> system) is over 9 times the weighted mean interval 1 ms (3462238
>>> samples)
>>> May 07 09:28:36|47030|timeval|WARN|context switches: 0 voluntary, 3
>>> involuntary
>>> May 07 09:28:36|47031|coverage|INFO|Skipping details of duplicate
>>> event coverage for hash=ba09b798 in epoch 3462238
>>>
>>> If you need any more information, do not hesitate to ask; I cannot
>>> find my way to a solution and I have been at this for several weeks.
>>>
>>> PS: sorry for my English.
>>>
>>> Regards
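The restart workaround described above can be automated as a stopgap until the leak itself is addressed. A minimal watchdog sketch, assuming the init script path from the message (/etc/init.d/openvswitch); the 400 MB threshold is an arbitrary assumption and should be tuned to the memory actually given to dom0:

```shell
#!/bin/sh
# Watchdog sketch: restart openvswitch when ovs-vswitchd's resident
# memory grows past a threshold. 400 MB is an assumed limit, not a
# recommendation from the thread.
THRESHOLD_MB=400

# Resident set size of a PID in MB, read from /proc (VmRSS is in kB).
rss_mb() {
    awk '/^VmRSS:/ {printf "%d", $2 / 1024}' "/proc/$1/status"
}

PID=$(pidof -s ovs-vswitchd 2>/dev/null || true)
if [ -n "$PID" ] && [ "$(rss_mb "$PID")" -gt "$THRESHOLD_MB" ]; then
    /etc/init.d/openvswitch restart
fi
```

Run from cron every few minutes if needed, but note that restarting the daemon is itself disruptive, so this only papers over the underlying growth.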
>
> I still have the same problem. I set up a lab server with XCP 1.1 and,
> with no virtual machines at all, after 11 days openvswitch is at 60% of
> memory (while doing nothing).
> Exactly what happens to me is asked in this thread[1], and the final
> answer there is: upgrade.
> Increasing the amount of memory for dom0 only stretched out the time to
> the explosion; even so, I would like to know what I have to change to
> increase dom0_mem a bit more (right now /etc/extlinux.conf has
> dom0_mem=725, but I actually see less than that).
> Any ideas?
>
> [1] 
> http://old-list-archives.xen.org/archives/html/xen-users/2011-06/msg00614.html
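For reference, dom0's memory is set on the Xen line of the boot entry in /etc/extlinux.conf. A sketch of what the relevant entry can look like (the label, kernel paths, and the 1024M value are illustrative only and vary between XCP installs; using an explicit unit suffix and a matching max: cap avoids ambiguity about how the number is interpreted):

```
label xe
  kernel mboot.c32
  # Xen options come before the first ---, dom0 kernel options after it
  append /boot/xen.gz dom0_mem=1024M,max:1024M ... --- /boot/vmlinuz-2.6-xen ... --- /boot/initrd-2.6-xen.img
```

Note that free memory reported inside dom0 will always be somewhat lower than dom0_mem, since the kernel's own reservations come out of that allocation, so seeing "less than 725" is expected to a degree.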

I added more memory to my dom0 and it has not hung for a week. I still
think this just kicks the can a little further down the road; sooner or
later it will explode. OVS consumption, both CPU and memory, grows
without stopping. I suspect the cause is the configuration of the
switch port my XCP hosts connect to: it is in VLAN trunk mode without
specifying which VLANs, as described here[1], and after seeing the
XenServer hotfix[2] I believe that is where my problem comes from. So
my questions are: can I apply this patch to XCP? Do you agree that the
problem could come from there?
[1] http://forums.citrix.com/thread.jspa?threadID=294649&start=15&tstart=0
[2] http://support.citrix.com/article/CTX131745
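A commonly suggested mitigation for a trunk-all switch port (whether it addresses the exact bug the hotfix fixes is an assumption) is to restrict the VLANs on the OVS uplink port explicitly, so the switch does not flood traffic for every VLAN at vswitchd. A configuration sketch, where the port name eth0 and the VLAN IDs are placeholders:

```
# Accept only the VLANs actually in use on the uplink (IDs are examples)
ovs-vsctl set port eth0 trunks=10,20,30

# Verify the setting took effect
ovs-vsctl list port eth0 | grep trunks
```

The better fix, if the hotfix does target this, would still be matching the switch-side trunk configuration to the VLANs the hosts need.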

Regards
--
nycko



_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api


 

