
Re: [Xen-devel] [PATCH 4 of 5 V3] tools/libxl: Control network buffering in remus callbacks [and 1 more messages]



On Mon, 2013-11-04 at 09:17 -0600, Shriram Rajagopalan wrote:
> On Mon, Nov 4, 2013 at 6:12 AM, Ian Jackson
> <Ian.Jackson@xxxxxxxxxxxxx> wrote:
>         Shriram Rajagopalan writes ("Re: [PATCH 4 of 5 V3]
>         tools/libxl: Control network buffering in remus callbacks"):
>         > Nanosleep was more of my personal preference,
>         
>         I don't think that's a good enough reason for the churn, but
>         as I say
>         this really ought to be replaced with use of a timeout event.
>         
> 
> Fair enough. My question is: what is the overhead of setting up, firing
> and tearing down a timeout event using the event-gen framework, if I wish
> to checkpoint the VM, say, every 20ms? If you happen to have the numbers
> off the top of your head, that would help. Or, if you are sure that the
> system can easily handle this rate of events with very little overhead
> (<0.2ms per event), that would settle it too.

Regardless of the answer here, would it make sense to do some or all of the
checkpoint processing in the helper subprocess anyway, and only signal the
eventual failover up to the libxl process?
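A toy illustration of that split (hypothetical, not the actual libxl
save/restore helper protocol): the checkpoint loop lives in a forked helper,
and the parent, standing in for the libxl process, only watches a single
pipe fd and reacts when the helper reports that failover is needed.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <poll.h>

int main(void)
{
    int pipefd[2];
    if (pipe(pipefd)) { perror("pipe"); return 1; }

    pid_t pid = fork();
    if (pid == 0) {                      /* helper: owns the checkpoint loop */
        close(pipefd[0]);
        for (int i = 0; i < 5; i++)
            usleep(20 * 1000);           /* pretend to checkpoint every 20ms */
        char msg = 'F';                  /* primary "failed": request failover */
        write(pipefd[1], &msg, 1);
        _exit(0);
    }

    close(pipefd[1]);                    /* parent: the "libxl process" */
    struct pollfd pfd = { .fd = pipefd[0], .events = POLLIN };
    poll(&pfd, 1, -1);                   /* its event loop only sees this fd */
    char msg;
    if (read(pipefd[0], &msg, 1) == 1 && msg == 'F')
        printf("helper requested failover; resume guest on secondary\n");
    return 0;
}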

This async op is potentially quite long-running compared to a normal one, I
think: if the guest doesn't die, the ao is expected to live "forever". Since
allocations on the associated gc persist until the ao ends, this might end
up accumulating a lot of memory. Ian had a similar concern about Roger's
hotplug daemon series and suggested creating a per-iteration gc or something
along those lines.
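For the per-iteration idea, a sketch with a hypothetical arena type
(iter_gc, not the real libxl__gc API): scratch allocations made while
producing one checkpoint are freed when that iteration ends, instead of
persisting until the ao completes.

#include <stdlib.h>

typedef struct iter_gc {
    void **ptrs;
    size_t n, cap;
} iter_gc;

/* Record every allocation so the whole iteration can be freed at once. */
static void *iter_alloc(iter_gc *gc, size_t size)
{
    if (gc->n == gc->cap) {
        gc->cap = gc->cap ? gc->cap * 2 : 16;
        gc->ptrs = realloc(gc->ptrs, gc->cap * sizeof(*gc->ptrs));
    }
    return gc->ptrs[gc->n++] = malloc(size);
}

static void iter_free_all(iter_gc *gc)
{
    for (size_t i = 0; i < gc->n; i++)
        free(gc->ptrs[i]);
    free(gc->ptrs);
    *gc = (iter_gc){ 0 };
}

/* One checkpoint round: its scratch memory dies with the iteration,
 * so an ao that lives "forever" does not accumulate it. */
static void checkpoint_once(void)
{
    iter_gc gc = { 0 };
    char *scratch = iter_alloc(&gc, 4096);   /* e.g. a temporary record buffer */
    (void)scratch;                           /* ... produce the checkpoint ... */
    iter_free_all(&gc);
}

int main(void)
{
    for (int i = 0; i < 1000; i++)           /* many rounds, flat memory usage */
        checkpoint_once();
    return 0;
}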

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

