Re: [Xen-devel] [RFC PATCH] Start PV guest faster
>>> Frediano Ziglio <frediano.ziglio@xxxxxxxxxx> 05/29/14 7:24 PM >>>
>On Tue, 2014-05-20 at 10:30 +0100, Jan Beulich wrote:
>> >>> On 20.05.14 at 09:26, <frediano.ziglio@xxxxxxxxxx> wrote:
>> > For a while I have noticed that the time to start a large PV guest
>> > depends on the amount of memory. For VMs with 64 or more GB of RAM
>> > the time can become quite significant (around 20 seconds). Digging
>> > around I found that a lot of time is spent populating RAM (from a
>> > single hypercall made by xenguest).
>>
>> Did you check whether - like noticed elsewhere - this is due to
>> excessive hypercall preemption/restart? I.e. whether making
>> the preemption checks less fine grained helps?
>>
>
>Yes, you are right!
>
>Sorry for the late reply, I only got some time now. I did some tests
>on a not-so-big machine (3GB), using strace to see the amount of time
>spent:
>
>| Xen preempt check | User allocation | Time, all ioctls (sec) |
>| yes               | single pages    | 0.262                  |
>| no                | single pages    | 0.0612                 |
>| yes               | bunch of pages  | 0.0325                 |
>| no                | bunch of pages  | 0.0280                 |
>
>So yes, the preemption check (which I disabled entirely for the
>tests!) is the main factor. Of course disabling it entirely is not
>the right solution. Is there some way to decide how often to do it,
>some sort of computation/timing?

If you look at other instances, it's mostly heuristic at this point.
I suppose you'd want to make the preemption granularity slightly
allocation-order dependent, e.g. preempt every 64 pages allocated
(unless, of course, the allocation order is even higher than that).
Generally a time-based approach (say every millisecond) might be
reasonable too, but reading out the time on each iteration isn't
without cost, so I'd recommend against this.

Jan
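
For illustration, a minimal sketch of the allocation-order-dependent
preemption heuristic Jan describes, loosely modelled on the population
loop in Xen's common/memory.c (populate_physmap()). The function name,
PREEMPT_SHIFT, and the surrounding structure are simplified placeholders
rather than actual Xen code; only hypercall_preempt_check() and
alloc_domheap_pages() are real Xen-internal helpers:

/*
 * Sketch only: check for preemption roughly every 64 pages allocated,
 * or on every iteration once a single allocation already covers
 * 64 (2^6) pages or more.
 */
#define PREEMPT_SHIFT 6

static unsigned long populate_sketch(struct domain *d,
                                     unsigned long nr_extents,
                                     unsigned int order)
{
    unsigned long i;
    /* At order 0 this mask is 63, so we check every 64 iterations;
     * at order >= 6 it is 0, so we check on every iteration. */
    unsigned long mask = (order >= PREEMPT_SHIFT)
        ? 0 : ((1UL << (PREEMPT_SHIFT - order)) - 1);

    for ( i = 0; i < nr_extents; i++ )
    {
        if ( i && (i & mask) == 0 && hypercall_preempt_check() )
            return i;  /* caller would create a continuation here */

        if ( alloc_domheap_pages(d, order, 0) == NULL )
            break;     /* out of memory; real code reports the error */
    }

    return i;
}

The real populate_physmap() additionally tracks a start extent so that,
after such a preemption, the hypercall continuation mechanism can
restart the operation where it left off instead of from the beginning.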