Re: [Xen-devel] [PATCH v4 0/6] save/restore on Xen
On Fri, 20 Jan 2012, Jan Kiszka wrote:
> On 2012-01-20 18:20, Stefano Stabellini wrote:
> > Hi all,
> > this is the fourth version of the Xen save/restore patch series.
> > We have been discussing this issue for quite a while on #qemu and
> > qemu-devel:
> >
> > http://marc.info/?l=qemu-devel&m=132346828427314&w=2
> > http://marc.info/?l=qemu-devel&m=132377734605464&w=2
> >
> > A few different approaches were proposed to achieve the goal of a
> > working save/restore with upstream Qemu on Xen; however, after
> > prototyping some of them I came up with yet another solution, one
> > that I think leads to the best results with the least amount of
> > code duplication and ugliness.
> > I am far from claiming that this patch series is an example of
> > elegance and simplicity, but it is closer to acceptable than
> > anything else I have seen so far.
> >
> > What's new is that Qemu is going to keep track of its own physmap
> > on xenstore, so that Xen can be fully aware of the changes Qemu
> > makes to the guest's memory map at any time.
> > This is all handled internally by Xen, or by the Xen support code
> > in Qemu, and can be used to solve our save/restore framebuffer
> > problem.
> >
> > From the Qemu common code point of view, we still need to avoid
> > saving the guest's ram when running on Xen, and we need to avoid
> > resetting the videoram on restore (that is a benefit to the
> > generic Qemu case too, because it saves a few CPU cycles).
>
> For my understanding: Refraining from the memset is required as the
> already restored vram would then be overwritten?

Yep

> Or what is the ordering of init, RAM restore, and initial device
> reset now?

RAM restore (done by Xen)
physmap rebuild (done by xen_hvm_init in qemu)
pc_init()
qemu_system_reset()
load_vmstate()
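
To make the physmap idea above concrete, here is a minimal sketch of how
such an entry could be recorded through libxenstore. The node layout
under /local/domain/0/device-model/<domid>/physmap, the struct fields,
and the record_physmap() helper are illustrative assumptions based on
the description in the thread, not code lifted from the series:

    /* Sketch: record one remapped region in xenstore so the Xen
     * toolstack can see device-model changes to the guest memory map.
     * Path layout and field names are assumptions for illustration. */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <stdbool.h>
    #include <inttypes.h>
    #include <xenstore.h>

    struct physmap_entry {
        uint64_t start_addr;   /* guest-physical address after remapping */
        uint64_t size;         /* length of the region in bytes */
        uint64_t phys_offset;  /* original ram offset, used as the key */
    };

    static bool record_physmap(struct xs_handle *xsh, int domid,
                               const struct physmap_entry *e)
    {
        char path[128], value[32];

        /* one node per remapped region, keyed by its original offset */
        snprintf(path, sizeof(path),
                 "/local/domain/0/device-model/%d/physmap/%"PRIx64"/start_addr",
                 domid, e->phys_offset);
        snprintf(value, sizeof(value), "%"PRIx64, e->start_addr);
        if (!xs_write(xsh, XBT_NULL, path, value, strlen(value))) {
            return false;
        }

        snprintf(path, sizeof(path),
                 "/local/domain/0/device-model/%d/physmap/%"PRIx64"/size",
                 domid, e->phys_offset);
        snprintf(value, sizeof(value), "%"PRIx64, e->size);
        return xs_write(xsh, XBT_NULL, path, value, strlen(value));
    }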
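
The videoram point translates to a small guard around the reset-time
memset. runstate_check() and RUN_STATE_INMIGRATE exist in Qemu of this
era, but the state fields and where exactly the series places the check
are assumptions:

    /* Sketch: clear the vram only on a cold reset; when an incoming
     * state load (or a Xen restore) has already populated it, the
     * memset would wipe the restored contents. vram_ptr/vram_size are
     * placeholder names for the real vga state. */
    #include <string.h>
    #include <stdint.h>
    #include "sysemu.h"   /* runstate_check(), RUN_STATE_INMIGRATE */

    struct vga_state {
        uint8_t *vram_ptr;
        uint32_t vram_size;
    };

    static void vga_maybe_clear_vram(struct vga_state *s)
    {
        if (!runstate_check(RUN_STATE_INMIGRATE)) {
            memset(s->vram_ptr, 0, s->vram_size);
        }
    }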
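
On the restore side, the "physmap rebuild" step in xen_hvm_init could
read those nodes back before pc_init() runs, roughly along these lines
(same assumed layout and names as the write-side sketch; the call that
re-registers the mapping with the memory core is elided):

    /* Sketch: list the physmap entries Qemu recorded before the save
     * and rebuild the in-memory table from them. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <xenstore.h>

    static void rebuild_physmap(struct xs_handle *xsh, int domid)
    {
        char path[128];
        unsigned int num, len, i;
        char **entries;

        snprintf(path, sizeof(path),
                 "/local/domain/0/device-model/%d/physmap", domid);
        entries = xs_directory(xsh, XBT_NULL, path, &num);
        if (!entries) {
            return;  /* fresh boot: nothing was recorded before a save */
        }
        for (i = 0; i < num; i++) {
            char node[192];
            char *value;

            snprintf(node, sizeof(node), "%s/%s/start_addr",
                     path, entries[i]);
            value = xs_read(xsh, XBT_NULL, node, &len);
            if (value) {
                /* parse with strtoull(value, NULL, 16) and re-register
                 * the mapping with the memory core here */
                free(value);
            }
        }
        free(entries);
    }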