
Re: [Xen-devel] Improving domU restore time



On Tue, May 25, 2010 at 12:50:40PM +0100, Keir Fraser wrote:
> On 25/05/2010 11:35, "Rafal Wojtczuk" <rafal@xxxxxxxxxxxxxxxxxxxxxx> wrote:
> 
> > a) Is it correct that when xc_restore runs, the target domain memory is
> > already zeroed (because the hypervisor scrubs free memory before it is
> > assigned to a new domain)?
> 
> There is no guarantee that the memory will be zeroed.
Interesting.
For my education, could you explain who is responsible for clearing the
memory of a newborn domain? Xend? Could you point me to the relevant code
fragments?
It would seem sensible to clear free memory in hypervisor context during its
idle cycles; if non-temporal instructions (movnti) were used for this, it
would not pollute the caches, and the scrubbing has to be done at some point
anyway?
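
Just to illustrate what I have in mind (a toy sketch, not Xen's actual scrub
code): zeroing a page with the SSE2 _mm_stream_si128 intrinsic, the 128-bit
cousin of movnti, keeps the scrub out of the cache hierarchy:

#include <emmintrin.h>   /* SSE2: _mm_stream_si128 (movntdq) */
#include <stddef.h>

/* Zero one 4 KiB page with non-temporal stores; 'page' must be
 * 16-byte aligned.  The stores bypass the caches, so scrubbing a lot
 * of free memory does not evict anything useful. */
static void scrub_page_nt(void *page)
{
    __m128i zero = _mm_setzero_si128();
    __m128i *p = page;

    for (size_t i = 0; i < 4096 / sizeof(__m128i); i++)
        _mm_stream_si128(&p[i], zero);

    _mm_sfence();   /* order the non-temporal stores before later accesses */
}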

> > b) xen-3.4.3/xc_restore reads data from the savefile in 4k portions - so,
> > one read syscall per page. Make it read in larger chunks. It looks like
> > this is fixed in xen-4.0.0, is this correct?
> 
> It got changed a lot for Remus. I expect performance was on their mind.
> Normally the kernel's file readahead heuristic would win back most of the
> performance lost by not reading in larger chunks.
Yes, readahead would keep the disk request queue full, but I was just
thinking of lowering the syscall overhead. 1e5 syscalls is a lot :)
[user@qubes ~]$ dd if=/dev/zero of=/dev/null bs=4k count=102400
102400+0 records in
102400+0 records out
419430400 bytes (419 MB) copied, 0.307211 s, 1.4 GB/s
[user@qubes ~]$ dd if=/dev/zero of=/dev/null bs=4M count=100
100+0 records in
100+0 records out
419430400 bytes (419 MB) copied, 0.25347 s, 1.7 GB/s
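
For comparison, a toy sketch of the batching idea (not the real xc_restore
code; it assumes a flat stream of raw pages, whereas the actual save format
interleaves pfn batch headers):

#include <stdlib.h>
#include <unistd.h>

#define PAGE_SIZE   4096UL
#define BATCH_PAGES 1024UL                 /* 4 MiB per read() instead of 4 KiB */

/* Loop until exactly 'len' bytes have been read (read() may return short). */
static int read_exact(int fd, void *buf, size_t len)
{
    size_t done = 0;
    while (done < len) {
        ssize_t n = read(fd, (char *)buf + done, len - done);
        if (n <= 0)
            return -1;                     /* error or EOF */
        done += (size_t)n;
    }
    return 0;
}

int main(void)
{
    char *buf = malloc(BATCH_PAGES * PAGE_SIZE);
    if (!buf)
        return 1;
    /* One syscall now covers 1024 pages instead of one. */
    while (read_exact(0, buf, BATCH_PAGES * PAGE_SIZE) == 0) {
        for (unsigned long i = 0; i < BATCH_PAGES; i++) {
            char *page = buf + i * PAGE_SIZE;
            (void)page;                    /* ...map the target pfn and copy 'page'... */
        }
    }
    free(buf);
    return 0;
}

With 4 MiB reads, restoring a ~400 MB domain takes on the order of a hundred
read syscalls instead of 1e5.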

RW



 

