
Re: [Xen-API] XCP 1.6 VM.get_all_records API Call hanging

  • To: 'Dave Avent' <Dave.Avent@xxxxxxxxxxxxx>, "xen-api@xxxxxxxxxxxxx" <xen-api@xxxxxxxxxxxxx>
  • From: Dave Scott <Dave.Scott@xxxxxxxxxxxxx>
  • Date: Thu, 3 Jan 2013 12:05:58 +0000
  • Accept-language: en-US
  • Delivery-date: Thu, 03 Jan 2013 12:06:13 +0000
  • List-id: User and development list for XCP and XAPI <xen-api.lists.xen.org>
  • Thread-index: Ac3po3asoRRPEwLFQ9K6evrbyDMfEgABo3Ug
  • Thread-topic: XCP 1.6 VM.get_all_records API Call hanging


That's very strange. Could it be that the "VM.get_all_records" response is too 
large to be sent properly? Do other calls that generate less traffic (e.g. 
"Pool.get_all_records") always work, or do they sometimes fail too?
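One quick way to probe the size hypothesis from dom0 is a sketch like the following (my assumption: the xe CLI is on the path -- it talks to the same xapi daemon, so these commands stand in roughly for the small and large API calls):

```shell
# Hypothetical check: time a small query against the full VM record dump.
# If only the large one stalls, response size (or the thread serving it)
# is the likely suspect.
time xe pool-list >/dev/null            # small response, like Pool.get_all_records
time xe vm-list params=all >/dev/null   # large response, like VM.get_all_records
```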

Perhaps you've got a thread or fd leak? If you run "top" and look at the 
virtual size of the "xapi" process, is it very large (more than 1 or 2 GB)? 
Maybe there are too many threads and it's blocking. If the process does look 
that large then I'd be interested to see how many file descriptors are open 
(look in /proc/<pid>/fd -- but make sure you select the child pid, since the 
parent is a much smaller watchdog process). It might be that some client(s) 
are keeping connections open for longer than expected; xapi currently uses a 
thread-per-connection model, so each long-lived connection ties up a thread.
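The check above can be scripted; here is a rough sketch (the pid-selection heuristic in the comment is my assumption -- comparing RSS is just one plausible way to tell the large child apart from the small watchdog parent):

```shell
#!/bin/sh
# Sketch: report virtual size and open-fd count for a process.
# On an XCP host, point PID at the *child* xapi process; the parent is a
# small watchdog, so picking the xapi pid with the largest RSS is one
# plausible heuristic:
#   PID=$(ps -C xapi -o pid=,rss= | sort -k2 -n | tail -1 | awk '{print $1}')
PID=${1:-$$}   # defaults to this shell so the sketch runs anywhere

vsize_kb=$(awk '/^VmSize:/ {print $2}' "/proc/$PID/status")
open_fds=$(ls "/proc/$PID/fd" | wc -l)

echo "pid=$PID vsize_kb=$vsize_kb open_fds=$open_fds"
```

A virtual size in the multiple-GB range together with thousands of open fds would point at clients holding connections open.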

Does restarting the xapi process cause the problem to go away (even if only 
temporarily)?

> -----Original Message-----
> From: xen-api-bounces@xxxxxxxxxxxxx [mailto:xen-api-
> bounces@xxxxxxxxxxxxx] On Behalf Of Dave Avent
> Sent: 03 January 2013 11:14 AM
> To: xen-api@xxxxxxxxxxxxx
> Subject: [Xen-API] XCP 1.6 VM.get_all_records API Call hanging
> All,
> I have been playing with Xen Cloud Platform 1.6 in the lab and have been
> noticing some strange behaviour when using the API to drive it. Using
> XenCenter worked some of the time but then appeared to freeze, not
> refresh and then on re-connecting get stuck on "Synchronising". I put this
> down to the client software and switched instead to writing my own client
> interface using Ruby and the FOG libraries. This worked well but I started
> seeing the same lock-ups. After digging about I realised that it always locks
> up at the same point, namely the VM.get_all_records call.
> The call is received as the xensource.log shows:
> xapi: [debug|xcp-test2|105923 INET|dispatch:VM.get_all_records
> D:7f14fc2bea45|api_readonly] VM.get_all_records
> But nothing is received and the client's TCP connection times out.
> This problem happens for well over 50% of the calls made to the API, so I am
> interested to know whether anyone else is having this problem.
> Regards,
> Dave
> _______________________________________________
> Xen-api mailing list
> Xen-api@xxxxxxxxxxxxx
> http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api
