Re: [Xen-devel] Design session report: Live-Updating Xen
On 17/07/2019 14:02, Jan Beulich wrote:
> On 17.07.2019 13:26, Andrew Cooper wrote:
>> On 17/07/2019 08:09, Jan Beulich wrote:
>>> On 17.07.2019 01:51, Andrew Cooper wrote:
>>>> On 15/07/2019 19:57, Foerster, Leonard wrote:
>>>>> * dom0less: bootstrap domains without the involvement of dom0
>>>>>     -> this might come in handy to at least setup and continue dom0
>>>>>        on target xen
>>>>>     -> If we have this, this might also enable us to de-serialize
>>>>>        the state for other guest-domains in xen and not have to wait
>>>>>        for dom0 to do this
>>>> Reconstruction of dom0 is something which Xen will definitely need to
>>>> do.  With the memory still in place, it's just a fairly small amount
>>>> of register state which needs restoring.
>>>>
>>>> That said, reconstruction of the typerefs will be an issue.  Walking
>>>> over a fully populated L4 tree can (in theory) take minutes, and it's
>>>> not safe to just start executing without reconstruction.
>>>>
>>>> Depending on how bad it is in practice, one option might be to do a
>>>> demand validate of %rip and %rsp, along with a hybrid shadow mode
>>>> which turns faults into typerefs, which would allow the gross cost of
>>>> revalidation to be amortised while the vcpus were executing.  We would
>>>> definitely want some kind of logic to aggressively typeref outstanding
>>>> pagetables so the shadow mode could be turned off.
>>> Neither walking the page table trees nor an on-demand re-creation can
>>> possibly work, as pointed out during (partly informal) discussion: At
>>> the very least the allocated and pinned states of pages can only be
>>> transferred.
>> Pinned state exists in the current migrate stream.  Allocated does not -
>> it is an internal detail of how Xen handles the memory.
>>
>> But yes - this observation means that we can't simply walk the guest
>> pagetables.
>>
>>> Hence we seem to have come to agreement that struct page_info
>>> instances have to be transformed (in place if possible, i.e. when the
>>> sizes match, otherwise by copying).
>> -10 to this idea, if it can possibly be avoided.  In this case, it
>> definitely can be avoided.
>>
>> We do not want to be grovelling around in the old Xen's datastructures,
>> because that adds a binary A=>B translation which is
>> per-old-version-of-xen, meaning that you need a custom build of each
>> target Xen which depends on the currently-running Xen, or have to
>> maintain a matrix of old versions which will be dependent on the local
>> changes, and therefore not suitable for upstream.
> Now the question is what alternative you would suggest.  By you
> saying "the pinned state lives in the migration stream", I assume
> you mean to imply that Dom0 state should be handed from old to
> new Xen via such a stream (minus raw data page contents)?

Yes, and this is explicitly identified in the bullet point saying "We do
only rely on domain state and no internal xen state".

In practice, it is going to be far more efficient to have Xen
serialise/deserialise the domain register state etc, than to bounce it
via hypercalls.  By the time you're doing that in Xen, adding dom0 as
well is trivial.
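As a strawman of what that could look like (record names and layout
invented here for illustration - this is not the existing libxc stream
format, just the shape of the idea), the state handed over would be a
sequence of self-describing records which the new Xen parses:

  /* Illustrative sketch only: hypothetical record types and layout,
   * not a real ABI.  Each record is self-describing, so the new Xen
   * can fail cleanly on anything it does not understand. */
  #include <stdint.h>

  struct lu_rec_hdr {
      uint32_t type;    /* LU_REC_* below */
      uint32_t len;     /* payload bytes following this header */
  };

  enum {
      LU_REC_END = 0,
      LU_REC_VCPU_REGS,     /* payload: struct lu_vcpu_regs */
      LU_REC_EVTCHN,        /* ... and so on for other domain state */
  };

  struct lu_vcpu_regs {
      uint32_t domid, vcpu_id;
      uint64_t rip, rsp, rflags, cr3;
      /* ... rest of the architectural register state ... */
  };

  /* New Xen: walk the records the old Xen left in preserved memory. */
  static int lu_restore(const uint8_t *p, const uint8_t *end)
  {
      while ( (unsigned long)(end - p) >= sizeof(struct lu_rec_hdr) )
      {
          const struct lu_rec_hdr *hdr = (const void *)p;

          p += sizeof(*hdr);
          if ( hdr->len > (unsigned long)(end - p) )
              return -1;                     /* truncated stream */

          switch ( hdr->type )
          {
          case LU_REC_END:
              return 0;
          case LU_REC_VCPU_REGS:
              /* reconstruct the vcpu from the payload at p */
              break;
          default:
              return -1;                     /* unknown record */
          }

          p += hdr->len;
      }

      return -1;                             /* no LU_REC_END seen */
  }

The same records work whether it is dom0 or a domU being carried
across, which is the point above.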
>>>>>     -> We might have to go and re-inject certain interrupts
>>>> What hardware are you targeting here?  IvyBridge and later has a
>>>> posted interrupt descriptor which can accumulate pending interrupts
>>>> (at least manually), and newer versions (Broadwell?) can accumulate
>>>> interrupts directly from hardware.
>>> For HVM/PVH perhaps that's good enough.  What about PV though?
>> What about PV?
>>
>> The in-guest evtchn data structure will accumulate events just like a
>> posted interrupt descriptor.  Real interrupts will queue in the LAPIC
>> during the transition period.
> Yes, that'll work as long as interrupts remain active from Xen's POV.
> But if there's concern about a blackout period for HVM/PVH, then
> surely there would also be such for PV.

The only fix for that is to reduce the length of the blackout period.
We can't magically inject interrupts half way through the xen-to-xen
transition, because we can't run vcpus at that point in time.
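To spell out why nothing needs re-injecting for PV: the pending bits
live in the guest's shared info page, which is guest RAM and therefore
stays in place across the update.  A simplified sketch of the 2-level
event channel state and delivery (field names follow the public
headers, but this is not the verbatim ABI):

  /* All of this lives in the shared_info page, i.e. guest memory which
   * a live update does not touch.  Events raised before or during the
   * blackout are still pending when the vcpus run again. */
  #include <stdint.h>

  #define BITS_PER_LONG (sizeof(unsigned long) * 8)

  struct vcpu_info_sketch {
      uint8_t evtchn_upcall_pending;    /* "you have events waiting" */
      uint8_t evtchn_upcall_mask;
      unsigned long evtchn_pending_sel; /* which pending word(s) to scan */
  };

  struct shared_info_sketch {
      struct vcpu_info_sketch vcpu_info[32];
      unsigned long evtchn_pending[BITS_PER_LONG];  /* one bit per port */
      unsigned long evtchn_mask[BITS_PER_LONG];
  };

  static void set_bit(unsigned int nr, unsigned long *addr)
  {
      addr[nr / BITS_PER_LONG] |= 1UL << (nr % BITS_PER_LONG);
  }

  static int test_bit(unsigned int nr, const unsigned long *addr)
  {
      return (addr[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1;
  }

  /* Roughly what Xen does when an event fires for a bound port. */
  static void mark_pending(struct shared_info_sketch *s,
                           unsigned int vcpu, unsigned int port)
  {
      set_bit(port, s->evtchn_pending);       /* accumulates, never lost */

      if ( !test_bit(port, s->evtchn_mask) )
      {
          set_bit(port / BITS_PER_LONG,
                  &s->vcpu_info[vcpu].evtchn_pending_sel);
          s->vcpu_info[vcpu].evtchn_upcall_pending = 1;
      }
  }

The LAPIC plays the same role for real interrupts on the HVM side; in
both cases the accumulation happens in state which survives the
transition, so the only cost of the blackout is added latency.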
>>>>> A key cornerstone for Live-update is guest transparent live migration
>>>>>     -> This means we are using a well defined ABI for saving/restoring
>>>>>        domain state
>>>>>     -> We do only rely on domain state and no internal xen state
>>>> Absolutely.  One issue I discussed with David a while ago is that even
>>>> across an upgrade of Xen, the format of the EPT/NPT pagetables might
>>>> change, at least in terms of the layout of software bits.  (Especially
>>>> for EPT where we slowly lose software bits to new hardware features we
>>>> wish to use.)
>>> Right, and therefore a similar transformation to the one for struct
>>> page_info may be unavoidable here too.
>> None of that lives in the current migrate stream.  Again - it is
>> internal details, so is not something which is appropriate to be
>> inspected by the target Xen.
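To make the EPT point concrete (the layout below is approximate and
purely illustrative, not Xen's actual ept_entry_t): the hypervisor
stashes its own metadata in bits the hardware ignores, and every new
hardware feature that claims one of those bits changes the layout,
which is exactly why this is internal detail and must never appear in
the stream:

  /* Illustrative only - widths/positions approximate.  The "sw_*"
   * fields are the software-available bits; their layout can change
   * between Xen versions, e.g. as features like suppress-#VE claim
   * bits which used to be free for software use. */
  #include <stdint.h>

  typedef union {
      struct {
          uint64_t r:1, w:1, x:1;   /* access permissions                */
          uint64_t emt:3;           /* EPT memory type                   */
          uint64_t ipat:1;          /* ignore PAT                        */
          uint64_t sp:1;            /* superpage                         */
          uint64_t a:1, d:1;        /* accessed / dirty                  */
          uint64_t sw_misc:2;       /* software: misc flags              */
          uint64_t mfn:40;          /* machine frame number              */
          uint64_t sw_p2m_type:6;   /* software: p2m type of this page   */
          uint64_t sw_access:4;     /* software: mem_access policy       */
          uint64_t rsvd:1;
          uint64_t suppress_ve:1;   /* used to be software-available     */
      };
      uint64_t raw;
  } ept_entry_sketch_t;

A guest-transparent stream carries the logical information (gfn, type,
access) and lets the new Xen encode it in whatever entry layout it
happens to use.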
>>> Re-using large data structures (or arrays thereof) may also turn out
>>> useful in terms of latency until the new Xen actually becomes ready to
>>> resume.
>> When it comes to optimising the latency, there is a fair amount we might
>> be able to do ahead of the critical region, but I still think this would
>> be better done in terms of a "clean start" in the new Xen to reduce
>> binary dependences.
> Latency actually is only one aspect (albeit the larger the host, the more
> relevant it is).  Sufficient memory to have both old and new copies of the
> data structures in place, plus the migration stream, is another.  This
> would especially become relevant when even DomU-s were to remain in
> memory, rather than getting saved/restored.

But we're still talking about something which is on a multi-MB scale,
rather than multi-GB scale.  Xen itself is tiny.  Sure there are
overheads from the heap management and pagetables etc, but the
overwhelming majority of used memory is guest RAM which is staying in
place.

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel