Re: [Xen-devel] [PATCH v3 6/9] viridian: add implementation of synthetic interrupt MSRs
> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx] On Behalf Of Jan Beulich
> Sent: 25 February 2019 14:12
> To: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; Wei Liu <wei.liu2@xxxxxxxxxx>; Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>; Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>; Ian Jackson <Ian.Jackson@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>; Julien Grall <julien.grall@xxxxxxx>; xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>; Roger Pau Monne <roger.pau@xxxxxxxxxx>
> Subject: Re: [Xen-devel] [PATCH v3 6/9] viridian: add implementation of synthetic interrupt MSRs
>
> >>> On 31.01.19 at 11:47, <paul.durrant@xxxxxxxxxx> wrote:
> > @@ -105,6 +132,73 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
> >          viridian_map_guest_page(d, &v->arch.hvm.viridian->vp_assist);
> >          break;
> >
> > +    case HV_X64_MSR_SCONTROL:
> > +        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
> > +            return X86EMUL_EXCEPTION;
> > +
> > +        v->arch.hvm.viridian->scontrol = val;
> > +        break;
> > +
> > +    case HV_X64_MSR_SVERSION:
> > +        return X86EMUL_EXCEPTION;
> > +
> > +    case HV_X64_MSR_SIEFP:
> > +        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
> > +            return X86EMUL_EXCEPTION;
> > +
> > +        v->arch.hvm.viridian->siefp = val;
> > +        break;
> > +
> > +    case HV_X64_MSR_SIMP:
> > +        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
> > +            return X86EMUL_EXCEPTION;
> > +
> > +        viridian_unmap_guest_page(&v->arch.hvm.viridian->simp);
> > +        v->arch.hvm.viridian->simp.msr.raw = val;
> > +        viridian_dump_guest_page(v, "SIMP", &v->arch.hvm.viridian->simp);
> > +        if ( v->arch.hvm.viridian->simp.msr.fields.enabled )
> > +            viridian_map_guest_page(d, &v->arch.hvm.viridian->simp);
> > +        break;
> > +
> > +    case HV_X64_MSR_EOM:
> > +    {
> > +        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
> > +            return X86EMUL_EXCEPTION;
> > +
> > +        v->arch.hvm.viridian->msg_pending = 0;
> > +        break;
> > +    }
> > +    case HV_X64_MSR_SINT0 ... HV_X64_MSR_SINT15:
>
> Stray braces for the previous case, and a missing blank line between
> both.

Ok.

> > +    {
> > +        unsigned int sintx = idx - HV_X64_MSR_SINT0;
> > +        uint8_t vector = v->arch.hvm.viridian->sint[sintx].fields.vector;
> > +
> > +        if ( !(viridian_feature_mask(d) & HVMPV_synic) )
> > +            return X86EMUL_EXCEPTION;
> > +
> > +        /*
> > +         * Invalidate any previous mapping by setting an out-of-range
> > +         * index.
> > +         */
> > +        v->arch.hvm.viridian->vector_to_sintx[vector] =
> > +            ARRAY_SIZE(v->arch.hvm.viridian->sint);
> > +
> > +        v->arch.hvm.viridian->sint[sintx].raw = val;
> > +
> > +        /* Vectors must be in the range 16-255 inclusive */
> > +        vector = v->arch.hvm.viridian->sint[sintx].fields.vector;
> > +        if ( vector < 16 )
> > +            return X86EMUL_EXCEPTION;
>
> The Viridian spec may surely specify architecturally inconsistent
> behavior, but I'd like to double check that raising an exception
> after having updated some state already is really intended here.

The spec is ambiguous as to whether a subsequent read should see the
invalid value, but I agree it would be better to avoid changing state.

> > +        printk(XENLOG_G_INFO "%pv: VIRIDIAN SINT%u: vector: %x\n", v, sintx,
> > +               vector);
> > +        v->arch.hvm.viridian->vector_to_sintx[vector] = sintx;
> > +
> > +        if ( v->arch.hvm.viridian->sint[sintx].fields.polling )
> > +            clear_bit(sintx, &v->arch.hvm.viridian->msg_pending);
> > +
> > +        break;
> > +    }
> >      default:
>
> Missing blank line above here again (and one more in the rdmsr code
> below).

Ok.

> > @@ -116,6 +210,8 @@ int viridian_synic_wrmsr(struct vcpu *v, uint32_t idx, uint64_t val)
> >
> >  int viridian_synic_rdmsr(const struct vcpu *v, uint32_t idx, uint64_t *val)
> >  {
> > +    struct domain *d = v->domain;
>
> const?

Ok.

> > +void viridian_synic_poll_messages(struct vcpu *v)
>
> const?

At the moment, since the function does nothing, yes.
> > +bool viridian_synic_is_auto_eoi_sint(struct vcpu *v, uint8_t vector)
>
> const

Ok.

> > +{
> > +    int sintx = v->arch.hvm.viridian->vector_to_sintx[vector];
>
> I realize the array read from has uint8_t elements, but can this and ...
>
> > +    if ( sintx >= ARRAY_SIZE(v->arch.hvm.viridian->sint) )
> > +        return false;
> > +
> > +    return v->arch.hvm.viridian->sint[sintx].fields.auto_eoi;
> > +}
> > +
> > +void viridian_synic_ack_sint(struct vcpu *v, uint8_t vector)
> > +{
> > +    int sintx = v->arch.hvm.viridian->vector_to_sintx[vector];
>
> ... this please still be unsigned int?

Ok.

> Also for both functions I wonder whether their first parameters
> wouldn't better be struct viridian_vcpu *. This would certainly help
> readability here.

I'd prefer outside callers to just pass struct vcpu. I think I'll put in a
precursor patch to use a stack variable to point at viridian_vcpu or
viridian_domain in places where it makes sense, for the sake of
readability.

> > +    if ( sintx < ARRAY_SIZE(v->arch.hvm.viridian->sint) )
> > +        clear_bit(sintx, &v->arch.hvm.viridian->msg_pending);
>
> You also may want to use array_index_nospec() here and
> array_access_nospec() above, despite there not being very big a
> range to run past array bounds.

Ok, I can do that.

> > --- a/xen/arch/x86/hvm/vlapic.c
> > +++ b/xen/arch/x86/hvm/vlapic.c
> > @@ -461,11 +461,15 @@ void vlapic_EOI_set(struct vlapic *vlapic)
> >
> >  void vlapic_handle_EOI(struct vlapic *vlapic, u8 vector)
> >  {
> > +    struct vcpu *v = vlapic_vcpu(vlapic);
> >      struct domain *d = vlapic_domain(vlapic);
> >
> >      if ( vlapic_test_vector(vector, &vlapic->regs->data[APIC_TMR]) )
> >          vioapic_update_EOI(d, vector);
> >
> > +    if ( has_viridian_synic(v->domain) )
>
> Please use d here. And could this be "else if()"?

Ok, else if would work.
> > @@ -1301,6 +1305,13 @@ int vlapic_has_pending_irq(struct vcpu *v)
> >      if ( !vlapic_enabled(vlapic) )
> >          return -1;
> >
> > +    /*
> > +     * Poll the viridian message queues before checking the IRR since
> > +     * a synthetic interrupt may be asserted during the poll.
> > +     */
> > +    if ( has_viridian_synic(v->domain) )
> > +        viridian_synic_poll_messages(v);
> > +
> >      irr = vlapic_find_highest_irr(vlapic);
> >      if ( irr == -1 )
> >          return -1;
>
> While architecturally IRR can indeed become set at any time, is it
> acceptable to all of our other code for two successive invocations
> of the function to potentially produce different results? I'm in
> particular worried about {svm,vmx}_intr_assist(), and in their
> context I also wonder whether you don't need a new
> hvm_intsrc_* - it looks as if you use enough of the LAPIC to get
> away without, but I'm not entirely certain.

Well, according to the spec the synic is supposed to be a superset of the
local APIC so I'd rather stay combined. I guess I can add a latch so that
the poll is not retried until after vlapic_ack_pending_irq() has been
called.

> > --- a/xen/include/asm-x86/hvm/viridian.h
> > +++ b/xen/include/asm-x86/hvm/viridian.h
> > @@ -26,10 +26,30 @@ struct viridian_page
> >      void *ptr;
> >  };
> >
> > +union viridian_sint_msr
> > +{
> > +    uint64_t raw;
> > +    struct
> > +    {
> > +        uint64_t vector:8;
> > +        uint64_t reserved_preserved1:8;
> > +        uint64_t mask:1;
> > +        uint64_t auto_eoi:1;
> > +        uint64_t polling:1;
> > +        uint64_t reserved_preserved2:45;
> > +    } fields;
>
> This being an internal header, does the inner struct really need
> a field name?

For consistency, yes. Again I'll see about adding a precursor patch to
blow the 'fields' name away in other cases, then I won't need it here.
  Paul

> Jan

> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxxx
> https://lists.xenproject.org/mailman/listinfo/xen-devel