
Re: [Xen-users] Remus crashes



Hello,
Yes, everything works great with a non-intensive load. Also, maybe Remus
does not try to do traffic shaping with a non-intensive load :)

Another suggestion, then :)
Does vif1.0 exist? Since Remus tries to execute the tc command on the
vif1.0 interface, you may have an issue there.
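
A quick sanity check you could run from dom0 (just a sketch; the name
vif1.0 is taken from your error output, so substitute whatever backend
interface your protected domain actually uses):

```shell
#!/bin/sh
# Sanity checks for the two failure modes discussed in this thread.
# NOTE: "vif1.0" is assumed from the error log; adjust to your domain's vif.

# 1. Does the backend interface that Remus shapes traffic on exist?
if ip link show vif1.0 >/dev/null 2>&1; then
    echo "vif1.0 present"
else
    echo "vif1.0 missing"
fi

# 2. Is the tc binary (from the iproute2 package) installed and on PATH?
if command -v tc >/dev/null 2>&1; then
    # Show any queueing disciplines currently attached to the interface
    tc qdisc show dev vif1.0 2>/dev/null || echo "no qdisc info for vif1.0"
else
    echo "tc not installed"
fi
```

If the interface is missing while the domain is running, that alone would
explain the "tc failed: 2, No such file or directory" exception.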

Regards,
JB

On 17/11/2010 14:45, Darko Petrović wrote:
> Thanks for your help, JB.
> Yes, I would say that it is installed. The "tc" command does exist.
> After all, everything works correctly with non-intensive workloads.
> 
> Any other suggestions, anyone?
> 
> On 11/16/2010 09:31 PM, Jean Baptiste FAVRE wrote:
>> Hello,
>> It seems that Remus tries to use the "tc" command; I think it's for
>> traffic shaping, so that replication traffic takes priority.
>> Is the package installed on your dom0?
>>
>> Regards,
>> JB
>>
>> On 16/11/2010 16:45, Darko Petrović wrote:
>>   
>>> Hello everyone,
>>>
>>> I've managed to configure and start Remus. However, it works only while
>>> the protected server is not loaded. As soon as I start a
>>> memory-intensive server application, the stream of Remus messages stops,
>>> leaving me with an error message. I think the message is not always the
>>> same, but here is what I've got from the last run:
>>>
>>> PROF: suspending at 1289921328.328959
>>> PROF: resumed at 1289921330.903716
>>> xc: error: Error when flushing output buffer (32 = Broken pipe):
>>> Internal error
>>> tc filter del dev vif1.0 parent ffff: proto ip pref 10 u32
>>> RTNETLINK answers: Invalid argument
>>> We have an error talking to the kernel
>>> Exception xen.remus.util.PipeException: PipeException('tc failed: 2, No
>>> such file or directory',) in<bound method BufferedNIC.__del__ of
>>> <xen.remus.device.BufferedNIC object at 0x7f27844df210>>  ignored
>>>
>>> I have tried pinning one physical core to Domain 0 and another to the
>>> protected domain using vcpu-pin, but it doesn't help.
>>> Currently I am running the Xen-unstable tree (last updated 3-4 days
>>> ago), 2.6.32.25 pv-ops kernel as Dom-0 and 2.6.18.8 as Dom-U, but I had
>>> a very similar problem with Xen 4.0.0 and 2.6.18.8 as Dom-0.
>>>
>>> Any suggestions?
>>>
>>> Thanks
>>> Darko
>>>
>>> _______________________________________________
>>> Xen-users mailing list
>>> Xen-users@xxxxxxxxxxxxxxxxxxx
>>> http://lists.xensource.com/xen-users
>>>
>>>      
>>
> 
> 
> 

