Re: [Xen-devel] [PATCH v5 12/13] pvh/acpi: Save ACPI registers for PVH guests
On 12/20/2016 08:57 AM, Jan Beulich wrote:
>>>> On 17.12.16 at 00:18, <boris.ostrovsky@xxxxxxxxxx> wrote:
>> --- a/xen/arch/x86/hvm/pmtimer.c
>> +++ b/xen/arch/x86/hvm/pmtimer.c
>> @@ -257,7 +257,11 @@ static int acpi_save(struct domain *d, hvm_domain_context_t *h)
>>      int rc;
>>
>>      if ( !has_vpm(d) )
>> +    {
>> +        if ( !has_acpi_dm_ff(d) )
>> +            return hvm_save_entry(PMTIMER, 0, h, acpi);
>>          return 0;
>> +    }
>>
>>      spin_lock(&s->lock);
>>
>> @@ -286,7 +290,11 @@ static int acpi_load(struct domain *d, hvm_domain_context_t *h)
>>      PMTState *s = &d->arch.hvm_domain.pl_time->vpmt;
>>
>>      if ( !has_vpm(d) )
>> +    {
>> +        if ( !has_acpi_dm_ff(d) )
>> +            return hvm_load_entry(PMTIMER, h, acpi);
>>          return -ENODEV;
>> +    }
>>
>>      spin_lock(&s->lock);
> Seeing this I first of all wonder - would there be any harm in simply
> having PVH take (almost) the same route as HVM here? In particular
> there's a pmt_update_sci() call, an equivalent of which would seem
> to be needed for PVH too.

It is probably harmless, but not very useful either, since we'd be saving
PMTState, which is not in use by PVH.

As for pmt_update_sci(): the only case in which we generate an SCI is a
CPU map update, and that interrupt is made pending to the guest
immediately, so if we resend it during restore, won't we be delivering it
twice?

> Which in turn gets me to wonder whether some of the code which
> is already there couldn't be re-used (handle_evt_io() for example).

My plan is to merge these routines (to some extent) when we make HVM
guests use the same CPU hotplug code as PVH (i.e. stop using QEMU for
that).

> And then, seeing the locking here - don't you need some locking
> in the earlier patches too, both to serialize accesses from multiple
> guest vCPU-s and to arbitrate between Dom0 and the guest?

Yes, that was missing.

-boris
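As a rough illustration of the locking discussed at the end of the mail
(serializing PVH ACPI register accesses against concurrent guest vCPUs and
Dom0-initiated updates), here is a minimal sketch. The handler name, the
lock, the port numbers and the acpi field are assumptions made for
illustration only, not the names used in the actual series:

/*
 * Minimal sketch only: a per-domain lock serializing PVH ACPI register
 * accesses.  Everything marked "hypothetical" is not taken from the series.
 */
static int pvh_acpi_evt_io(struct domain *d, bool is_write,
                           unsigned int port, uint32_t *val)
{
    struct hvm_hw_acpi *acpi = &d->arch.hvm_domain.acpi; /* assumed field */
    int rc = 0;

    /* Serialize against other vCPUs and against Dom0-initiated updates. */
    spin_lock(&d->arch.hvm_domain.acpi_lock);            /* hypothetical lock */

    switch ( port )
    {
    case 0xb000:                      /* hypothetical PM1a_STS port */
        if ( is_write )
            acpi->pm1a_sts &= ~*val;  /* status bits are write-1-to-clear */
        else
            *val = acpi->pm1a_sts;
        break;

    case 0xb002:                      /* hypothetical PM1a_EN port */
        if ( is_write )
            acpi->pm1a_en = *val;
        else
            *val = acpi->pm1a_en;
        break;

    default:
        rc = -ENODEV;
        break;
    }

    spin_unlock(&d->arch.hvm_domain.acpi_lock);

    return rc;
}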