Re: [Xen-devel] [PATCH v2 1/8] viridian: add init hooks
> -----Original Message-----
> From: Paul Durrant [mailto:paul.durrant@xxxxxxxxxx]
> Sent: 08 January 2019 15:18
> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Paul Durrant <Paul.Durrant@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>;
> Jan Beulich <jbeulich@xxxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>;
> Roger Pau Monne <roger.pau@xxxxxxxxxx>
> Subject: [PATCH v2 1/8] viridian: add init hooks
>
> This patch adds domain and vcpu init hooks for viridian features. The init
> hooks do not yet do anything; the functionality will be added to by
> subsequent patches.
>
> NOTE: This patch also removes the call from the domain deinit function to
> the vcpu deinit function, as this is not necessary.
>
> Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
> Reviewed-by: Wei Liu <wei.liu2@xxxxxxxxxx>
> ---
> Cc: Jan Beulich <jbeulich@xxxxxxxx>
> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
> Cc: "Roger Pau Monné" <roger.pau@xxxxxxxxxx>
>
> v2:
>  - Remove call from domain deinit to vcpu deinit

Actually, further testing has shown this to be necessary so I'm going to
re-instate it in v3. I agree that it should not be necessary (because it
implies a generic HVM teardown problem) but this is not the patch to be
fixing such issues in.

  Paul
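For reference, re-instating that call in v3 would presumably just restore the
loop this patch removes from viridian_domain_deinit() (see the hunk against
xen/arch/x86/hvm/viridian/viridian.c below); this is only a sketch based on
the removed code, not the actual v3 change:

    void viridian_domain_deinit(struct domain *d)
    {
        struct vcpu *v;

        /* Re-instated per-vCPU teardown (removed by this patch). */
        for_each_vcpu ( d, v )
            viridian_vcpu_deinit(v);
    }
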
> ---
>  xen/arch/x86/hvm/hvm.c               | 14 +++++++++++++-
>  xen/arch/x86/hvm/viridian/viridian.c | 14 ++++++++++----
>  xen/include/asm-x86/hvm/viridian.h   |  3 +++
>  3 files changed, 26 insertions(+), 5 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 401c4a9312..9967169af6 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -665,12 +665,18 @@ int hvm_domain_initialise(struct domain *d)
>      if ( hvm_tsc_scaling_supported )
>          d->arch.hvm.tsc_scaling_ratio = hvm_default_tsc_scaling_ratio;
>
> +    rc = viridian_domain_init(d);
> +    if ( rc )
> +        goto fail2;
> +
>      rc = hvm_funcs.domain_initialise(d);
>      if ( rc != 0 )
> -        goto fail2;
> +        goto fail3;
>
>      return 0;
>
> + fail3:
> +    viridian_domain_deinit(d);
>   fail2:
>      rtc_deinit(d);
>      stdvga_deinit(d);
> @@ -1539,6 +1545,10 @@ int hvm_vcpu_initialise(struct vcpu *v)
>      if ( rc != 0 )
>          goto fail6;
>
> +    rc = viridian_vcpu_init(v);
> +    if ( rc )
> +        goto fail7;
> +
>      if ( v->vcpu_id == 0 )
>      {
>          /* NB. All these really belong in hvm_domain_initialise(). */
> @@ -1551,6 +1561,8 @@ int hvm_vcpu_initialise(struct vcpu *v)
>
>      return 0;
>
> + fail7:
> +    hvm_all_ioreq_servers_remove_vcpu(d, v);
>   fail6:
>      nestedhvm_vcpu_destroy(v);
>   fail5:
> diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viridian/viridian.c
> index c78b2918d9..65afa049d9 100644
> --- a/xen/arch/x86/hvm/viridian/viridian.c
> +++ b/xen/arch/x86/hvm/viridian/viridian.c
> @@ -417,6 +417,16 @@ int guest_rdmsr_viridian(const struct vcpu *v, uint32_t idx, uint64_t *val)
>      return X86EMUL_OKAY;
>  }
>
> +int viridian_vcpu_init(struct vcpu *v)
> +{
> +    return 0;
> +}
> +
> +int viridian_domain_init(struct domain *d)
> +{
> +    return 0;
> +}
> +
>  void viridian_vcpu_deinit(struct vcpu *v)
>  {
>      viridian_synic_wrmsr(v, HV_X64_MSR_VP_ASSIST_PAGE, 0);
> @@ -424,10 +434,6 @@ void viridian_vcpu_deinit(struct vcpu *v)
>
>  void viridian_domain_deinit(struct domain *d)
>  {
> -    struct vcpu *v;
> -
> -    for_each_vcpu ( d, v )
> -        viridian_vcpu_deinit(v);
>  }
>
>  static DEFINE_PER_CPU(cpumask_t, ipi_cpumask);
> diff --git a/xen/include/asm-x86/hvm/viridian.h b/xen/include/asm-x86/hvm/viridian.h
> index ec5ef8d3f9..f072838955 100644
> --- a/xen/include/asm-x86/hvm/viridian.h
> +++ b/xen/include/asm-x86/hvm/viridian.h
> @@ -80,6 +80,9 @@ viridian_hypercall(struct cpu_user_regs *regs);
>  void viridian_time_ref_count_freeze(struct domain *d);
>  void viridian_time_ref_count_thaw(struct domain *d);
>
> +int viridian_vcpu_init(struct vcpu *v);
> +int viridian_domain_init(struct domain *d);
> +
>  void viridian_vcpu_deinit(struct vcpu *v);
>  void viridian_domain_deinit(struct domain *d);
>
> --
> 2.20.1.2.gb21ebb671

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel