
Re: [Xen-devel] Improving domU restore time



On 25/05/2010 13:50, "Rafal Wojtczuk" <rafal@xxxxxxxxxxxxxxxxxxxxxx> wrote:

>> There is no guarantee that the memory will be zeroed.
> Interesting.
> For my education, could you explain who is responsible for clearing memory
> of a newborn domain ? Xend ? Could you point me to the relevant code
> fragments ?

New domains are not guaranteed to receive zeroed memory. The only guarantee
Xen provides is that when it frees the memory of a *dead* domain, it will
scrub the contents before reallocating them (it may not write zeroes,
however; a debug build of Xen, for example, does not!). For any other pages,
the domain freeing them must scrub them itself before handing them back to
Xen.

> It looks sensible to clear free memory in hypervisor context in its idle
> cycles; if non-temporal instructions (movnti) were used for this, it would
> not pollute caches, and it must be done anyway ?

Only for that one case (freeing pages of a dead domain). In that one case we
currently do it synchronously. But that is because it was better than my
previous crappy asynchronous scrubbing code. :-)
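
To illustrate the non-temporal idea Rafal mentions, a page scrub might look
something like the sketch below (user-space C with SSE2 intrinsics, built
with -msse2; scrub_page_nt is a made-up name and this is not what Xen
actually does):

    /*
     * Minimal sketch, not Xen code: zero one 4K page with non-temporal
     * stores so the scrub does not drag the page through the cache.
     * _mm_stream_si128 emits movntdq (movnti is the scalar
     * _mm_stream_si32/_mm_stream_si64 variant); the page is assumed to
     * be 4K-aligned, as guest frames are.
     */
    #include <emmintrin.h>   /* SSE2 intrinsics */
    #include <stddef.h>

    #define PAGE_SIZE 4096

    static void scrub_page_nt(void *page)
    {
        __m128i zero = _mm_setzero_si128();
        __m128i *p = page;

        /* 4096 / 16 = 256 non-temporal 16-byte stores. */
        for (size_t i = 0; i < PAGE_SIZE / sizeof(*p); i++)
            _mm_stream_si128(&p[i], zero);

        /* Non-temporal stores are weakly ordered; fence before the
         * page is treated as scrubbed. */
        _mm_sfence();
    }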

>>> b) xen-3.4.3/xc_restore reads data from savefile in 4k portions - so, one
>>> read syscall per page. Make it read in larger chunks. It looks like it is
>>> fixed in xen-4.0.0, is this correct ?
>> 
>> It got changed a lot for Remus. I expect performance was on their minds.
>> Normally the kernel's file readahead heuristic would recover most of the
>> performance lost by not reading in larger chunks.
> Yes, readahead would keep the disk request queue full, but I was just
> thinking of lowering the syscall overhead. 1e5 syscalls is a lot :)

Well, the code looks like it batches the reads now anyway. If it doesn't, it
would be interesting to see whether batching them measurably improves
performance.
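
If someone wants to experiment, the batching could look roughly like this
(illustrative user-space C only, not the actual xc_restore code;
restore_pages and the per-page callback are made-up names):

    /*
     * Sketch only, not xc_domain_restore: read the savefile in 4MB
     * chunks instead of one read() per 4K page, then hand each page to
     * a caller-supplied function.
     */
    #include <stdlib.h>
    #include <unistd.h>

    #define PAGE_SIZE   4096
    #define BATCH_PAGES 1024            /* 4MB per read() */

    static int restore_pages(int fd, unsigned long nr_pages,
                             void (*load_page)(const void *buf))
    {
        char *buf = malloc((size_t)BATCH_PAGES * PAGE_SIZE);
        if (!buf)
            return -1;

        while (nr_pages) {
            unsigned long batch =
                nr_pages < BATCH_PAGES ? nr_pages : BATCH_PAGES;
            size_t want = (size_t)batch * PAGE_SIZE, got = 0;

            /* One syscall covers up to BATCH_PAGES pages; the inner
             * loop copes with short reads. */
            while (got < want) {
                ssize_t n = read(fd, buf + got, want - got);
                if (n <= 0) {
                    free(buf);
                    return -1;
                }
                got += (size_t)n;
            }

            for (unsigned long i = 0; i < batch; i++)
                load_page(buf + i * PAGE_SIZE);

            nr_pages -= batch;
        }

        free(buf);
        return 0;
    }

With 4MB batches, the ~1e5 read() calls for a ~400MB image drop to about
100, which is roughly the difference the dd comparison below is measuring.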

 -- Keir

> [user@qubes ~]$ dd if=/dev/zero of=/dev/null bs=4k count=102400
> 102400+0 records in
> 102400+0 records out
> 419430400 bytes (419 MB) copied, 0.307211 s, 1.4 GB/s
> [user@qubes ~]$ dd if=/dev/zero of=/dev/null bs=4M count=100
> 100+0 records in
> 100+0 records out
> 419430400 bytes (419 MB) copied, 0.25347 s, 1.7 GB/s



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

