Re: [Xen-devel] Design session report: Live-Updating Xen
On 17/07/2019 08:09, Jan Beulich wrote:
> On 17.07.2019 01:51, Andrew Cooper wrote:
>> On 15/07/2019 19:57, Foerster, Leonard wrote:
>>> * dom0less: bootstrap domains without the involvement of dom0
>>>    -> this might come in handy to at least set up and continue dom0
>>>       on target xen
>>>    -> If we have this, it might also enable us to de-serialize the
>>>       state for other guest domains in xen and not have to wait for
>>>       dom0 to do this
>>
>> Reconstruction of dom0 is something which Xen will definitely need
>> to do.  With the memory still in place, it's just a fairly small
>> amount of register state which needs restoring.
>>
>> That said, reconstruction of the typerefs will be an issue.  Walking
>> over a fully populated L4 tree can (in theory) take minutes, and
>> it's not safe to just start executing without reconstruction.
>>
>> Depending on how bad it is in practice, one option might be to do a
>> demand validation of %rip and %rsp, along with a hybrid shadow mode
>> which turns faults into typerefs, which would allow the gross cost
>> of revalidation to be amortised while the vcpus were executing.  We
>> would definitely want some kind of logic to aggressively typeref
>> outstanding pagetables so the shadow mode could be turned off.
>
> Neither walking the page table trees nor an on-demand re-creation can
> possibly work, as pointed out during (partly informal) discussion: at
> the very least the allocated and pinned states of pages can only be
> transferred.

Pinned state exists in the current migrate stream.  Allocated does not
- it is an internal detail of how Xen handles the memory.  But yes -
this observation means that we can't simply walk the guest pagetables.

> Hence we seem to have come to agreement that struct page_info
> instances have to be transformed (in place if possible, i.e. when
> the sizes match, otherwise by copying).

-10 to this idea, if it can possibly be avoided.  In this case, it
definitely can be avoided.

We do not want to be grovelling around in the old Xen's data
structures, because that adds a binary A=>B translation which is
per-old-version-of-Xen.  That means either needing a custom build of
each target Xen which depends on the currently-running Xen, or having
to maintain a matrix of old versions, which will be dependent on the
local changes and therefore not suitable for upstream.

>>> -> We might have to go and re-inject certain interrupts
>>
>> What hardware are you targeting here?  IvyBridge and later has a
>> posted interrupt descriptor which can accumulate pending interrupts
>> (at least manually), and newer versions (Broadwell?) can accumulate
>> interrupts directly from hardware.
>
> For HVM/PVH perhaps that's good enough.  What about PV though?

What about PV?  The in-guest evtchn data structure will accumulate
events just like a posted interrupt descriptor.  Real interrupts will
queue in the LAPIC during the transition period.

We obviously can't let interrupts be dropped, but there also shouldn't
be any need to re-inject any.

>>> A key cornerstone for Live-update is guest-transparent live
>>> migration
>>>    -> This means we are using a well-defined ABI for
>>>       saving/restoring domain state
>>>    -> We rely only on domain state and no internal xen state
>>
>> Absolutely.  One issue I discussed with David a while ago is that
>> even across an upgrade of Xen, the format of the EPT/NPT pagetables
>> might change, at least in terms of the layout of software bits.
>> (Especially for EPT, where we slowly lose software bits to new
>> hardware features we wish to use.)
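To illustrate the EPT point (with made-up bit positions - the layouts
below are a sketch of the shape of the problem, not any real Xen
version's entry format):

    #include <stdint.h>

    /* Illustrative only: the field positions are invented, and C
     * bitfield layout is itself implementation-defined. */
    struct ept_entry_v1 {                  /* "old" Xen */
        uint64_t read:1, write:1, exec:1;  /* hardware-defined bits */
        uint64_t sw_p2m_type:4;            /* software: cached p2m type */
        uint64_t sw_avail:5;               /* software: spare bits */
        uint64_t mfn:40;                   /* hardware: frame number */
        uint64_t ignored:12;
    };

    struct ept_entry_v2 {                  /* "new" Xen */
        uint64_t read:1, write:1, exec:1;
        uint64_t hw_feature:1;             /* bit reclaimed by hardware */
        uint64_t sw_p2m_type:4;            /* same field, shifted by one */
        uint64_t sw_avail:4;               /* one fewer spare bit */
        uint64_t mfn:40;
        uint64_t ignored:12;
    };

A new Xen which reinterpreted raw v1 entries using the v2 layout would
misread every software-defined field, which is why these tables are
internal details rather than something that can be carried across
versions verbatim.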
> Right, and therefore a similar transformation like for struct
> page_info may be unavoidable here too.

None of that lives in the current migrate stream.  Again - these are
internal details, so not something which is appropriate to be
inspected by the target Xen.  (See the record-framing sketch at the
end of this message.)

> Re-using large data structures (or arrays thereof) may also turn out
> useful in terms of latency until the new Xen actually becomes ready
> to resume.

When it comes to optimising the latency, there is a fair amount we
might be able to do ahead of the critical region, but I still think
this would be better done in terms of a "clean start" in the new Xen,
to reduce binary dependencies.

~Andrew
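For concreteness, the migrate stream's framing is self-describing,
roughly like this (loosely after docs/specs/libxc-migration-stream.pandoc
in the Xen tree; the declarations below are an illustrative sketch, not
the real definitions):

    #include <stdint.h>

    /* Each record announces its own type and length, so a consumer can
     * process the records it understands and skip the rest.  Nothing in
     * here depends on the producing Xen's internal data structures. */
    struct record_header {
        uint32_t type;      /* what the body contains */
        uint32_t length;    /* body size in bytes, excluding padding */
        /* body follows, padded up to an 8-byte boundary */
    };

    /* Advance to the next record - works even for unknown types. */
    static inline const struct record_header *
    next_record(const struct record_header *h)
    {
        uint64_t body = ((uint64_t)h->length + 7) & ~7ULL;
        return (const struct record_header *)
               ((const uint8_t *)h + sizeof(*h) + body);
    }

Restoring from records like these is what keeps the target Xen
independent of the old Xen's binary layout.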