Re: [Xen-API] [Squeezd] Disabling some balancing features



Mike,

> ... seems to show xenopsd trying to rebalance memory,

During a migration, a general balance-memory RPC is called before the
whole process starts (in the pool_migrate_nolock function). Reserving
memory for the migration also uses the same algorithm described in the
Squeezer module of squeeze.ml, via the call chain
Squeezed.reserve_memory -> Squeeze_xen.free_memory ->
Squeeze.change_host_free_memory, which runs a loop with
Squeezer.one_iteration inside its body.
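
To make the shape of that loop concrete, here is a minimal OCaml sketch
of a squeeze-style balancing loop. It is only an illustration under my
own simplified model: the vm type, the host_free toy accounting, the
fixed 4096 KiB step and the iteration cap are all assumptions of mine,
not the real squeezed code or its API.

  (* Illustrative sketch of a squeeze-style balancing loop; none of
     these definitions are the real squeezed implementation. *)
  type vm = {
    mutable target : int64;   (* current balloon target, KiB *)
    dynamic_min : int64;      (* lowest target we may set, KiB *)
    dynamic_max : int64;      (* highest target we may set, KiB *)
  }

  let ( -- ) = Int64.sub
  let ( ++ ) = Int64.add

  (* Toy model of host free memory: total minus the sum of all targets. *)
  let host_free ~host_total vms =
    List.fold_left (fun acc vm -> acc -- vm.target) host_total vms

  (* One iteration: while memory is still needed, move every VM's target
     a step towards dynamic_min; otherwise let it drift back towards
     dynamic_max. *)
  let one_iteration ~need_to_free vms =
    let step = 4096L in
    List.iter
      (fun vm ->
        if need_to_free > 0L then
          vm.target <- max vm.dynamic_min (vm.target -- step)
        else
          vm.target <- min vm.dynamic_max (vm.target ++ step))
      vms

  (* Loop until the requested amount is free, or give up after a fixed
     number of iterations (the real code uses a timeout instead). *)
  let change_host_free_memory ~host_total ~required_free ~max_iters vms =
    let iters = ref 0 in
    while host_free ~host_total vms < required_free && !iters < max_iters do
      one_iteration
        ~need_to_free:(required_free -- host_free ~host_total vms) vms;
      incr iters
    done;
    host_free ~host_total vms >= required_free

The real loop drives xenstore balloon targets and waits for the guests'
balloon drivers to respond, but the control flow is the same idea: keep
calling one_iteration until the host has freed the reserved amount or
the attempt times out.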

Happy hacking!

On Tue, Oct 23, 2012 at 5:38 PM, Mike McClurg <mike.mcclurg@xxxxxxxxxx> wrote:
> On 23/10/12 14:15, Dave Scott wrote:
>> In case it's useful: the most recent versions of xapi (found in XenServer 
>> 6.1 and should be in XCP 1.6) can run without squeezed. So you can
>>
>> service squeezed stop
>>
>> and then when you try to start a VM, there won't be any squeezing at all. 
>> Your new daemon could do whatever it likes to manage the VM balloon targets 
>> independently of xapi.
>>
>> Does that help at all?
>>
>> Cheers,
>> Dave
>
> Hi Dave,
>
> I just tried this on XCP 1.6. I stopped squeezed, and then restarted
> xenopsd and xapi (for luck), and then tried a localhost migrate. I got
> the error:
>
> The server failed to handle your request, due to an internal error.  The
> given message may give details useful for debugging the problem.
> message: Xenops_interface.Internal_error("Unix.Unix_error(63,
> \"connect\", \"\")")
>
> xensource.log (see below) seems to show xenopsd trying to rebalance
> memory, even though there is plenty of memory free. Do you know what's
> going on here?
>
> Mike
>
>
> [xensource.log]
> Oct 23 14:26:30 xcp-boston-53341-1 /opt/xensource/libexec/xenopsd: [debug|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|xenops] VM = dd2b8958-15aa-1988-c7fa-e7b8c3a28eb2; domid = 3; set_memory_dynamic_range min = 262144; max = 262144
> Oct 23 14:26:30 xcp-boston-53341-1 /opt/xensource/libexec/xenopsd: [debug|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|xenops] rebalance_memory
> Oct 23 14:26:30 xcp-boston-53341-1 /opt/xensource/libexec/xenopsd: [debug|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|mscgen] xenops=>squeezed [label="balance_memory"];
> Oct 23 14:26:30 xcp-boston-53341-1 xenopsd: [info|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|xenops] Caught Unix.Unix_error(63, "connect", "") executing ["VM_migrate", ["dd2b8958-15aa-1988-c7fa-e7b8c3a28eb2", {}, {}, "http:\/\/10.80.238.191\/services\/xenops?session_id=OpaqueRef:7fc22b4d-f70b-25dc-dca3-b834b7eb5e5d"]]: triggering cleanup actions
> Oct 23 14:26:30 xcp-boston-53341-1 xenopsd: [debug|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|xenops] Task 11 reference VM.pool_migrate R:944931f7933c: ["VM_check_state", "dd2b8958-15aa-1988-c7fa-e7b8c3a28eb2"]
> Oct 23 14:26:30 xcp-boston-53341-1 xenopsd: [debug|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|xenops] VM dd2b8958-15aa-1988-c7fa-e7b8c3a28eb2 is not requesting any attention
> Oct 23 14:26:30 xcp-boston-53341-1 xenopsd: [debug|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|xenops] VM_DB.signal dd2b8958-15aa-1988-c7fa-e7b8c3a28eb2
> Oct 23 14:26:30 xcp-boston-53341-1 xenopsd: [debug|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|xenops] Task 11 completed; duration = 0
> Oct 23 14:26:30 xcp-boston-53341-1 xenopsd: [debug|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|xenops] Task 10 failed; exception = ["Internal_error", "Unix.Unix_error(63, \"connect\", \"\")"]
> Oct 23 14:26:30 xcp-boston-53341-1 xenopsd: [debug|xcp-boston-53341-1|7|VM.pool_migrate R:944931f7933c|xenops]
> Oct 23 14:26:30 xcp-boston-53341-1 xenopsd: [debug|xcp-boston-53341-1|7||xenops] TASK.signal 10 = ["Failed", ["Internal_error", "Unix.Unix_error(63, \"connect\", \"\")"]]
>
>

_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api
