
Re: [Xen-devel] [PATCH RFC XEN v1 09/14] xen: arm: Save and restore GIC state.



On Wed, 2015-12-16 at 18:30 +0000, Stefano Stabellini wrote:
> On Wed, 9 Dec 2015, Ian Campbell wrote:
> > Currently only GICv2 support is implemented.
> > 
> > Given the differing architectural state between the GICv2 and v3 I
> > ended up with separate save records. I'm not sure if this is the best
> > plan. I have also split the state into GICD (per domain) and GICC (per
> > vcpu, although also including banked GICD state).
> 
> I think it makes sense to have a per-vcpu and a per-domain save/restore
> function pair.

I agree. The main question I was asking above was whether there should be a
separate struct and pair of functions for each version of the GIC. At the
time I wasn't so sure, but having slept on it a bit I think it is best this
way.
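
Roughly what I'm now leaning towards, as a sketch only (the struct names and
fields here are illustrative, not the final layout):

    struct hvm_hw_gicv2_gicd {     /* per domain */
        uint32_t ctlr;             /* GICD_CTLR */
    };

    struct hvm_hw_gicv2_gicc {     /* per vcpu, including banked GICD state */
        uint32_t ctlr;             /* GICC_CTLR */
        uint32_t pmr;              /* GICC_PMR */
    };

with a separate hvm_hw_gicv3_* pair (and save/restore handlers) added when
GICv3 support follows.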

> > +
> > +    ctxt.ctlr = d->arch.vgic.ctlr;
> 
> Do we need to save/restore vgic.dbase?

Yes, in fact we should save everything which comes from public/arch-arm.h
GUEST_* #defines and confirm on restore that they match the hardcoded
values in the new Xen (and if they ever change we'll need compat support).
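
Roughly (a sketch only; the context field names and which GUEST_* constants
get checked are illustrative here, not the final patch):

    /* Save side: record the GIC frame addresses the guest was given. */
    ctxt.dbase = d->arch.vgic.dbase;
    ctxt.cbase = d->arch.vgic.cbase;

    /* Restore side: the new Xen hardcodes the same layout via the GUEST_*
     * #defines in public/arch-arm.h, so refuse anything which differs. If
     * those defines ever change we will need compat handling here. */
    if ( ctxt.dbase != GUEST_GICD_BASE || ctxt.cbase != GUEST_GICC_BASE )
        return -EINVAL;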

> > +
> > +        /* Save PPI and SGI states (per-CPU) */
> > +        spin_lock(&v->arch.vgic.lock);
> 
> vgic_lock

Ah, yes, this function was originally in save.c so didn't have access, but
now that it is in the right place I should use all the correct functions.

> Is the domain already paused at this point? If so, maybe we could get
> away without locking?

Maybe we could think about it hard enough to rationalise that, but it seems
safer and easier to just follow the locking rules regardless.

> > +int vgic_irq_save_core(struct vcpu *v, unsigned irq, struct hvm_hw_irq *s)
> > +{
> > +    struct pending_irq *p = irq_to_pending(v, irq);
> > +    struct vgic_irq_rank *rank = vgic_rank_irq(v, p->irq);
> > +
> > +    const bool enabled = test_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> > +    const bool queued = test_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
> > +    const bool visible = test_bit(GIC_IRQ_GUEST_VISIBLE, &p->status);
> > +
> > +    spin_lock(&rank->lock);
> 
> vgic_rank_lock. Maybe we need to take the vgic lock because there might
> be incoming hardware interrupts for the domain but I don't think we need
> the rank lock, do we?

As above I think it is simpler and safer to just follow the usual rules
rather than to special case things.
> 
> > +        if ( lr.state & GICH_LR_PENDING )
> > +            s->state |= HVM_HW_IRQ_STATE_PENDING;
> 
> This is unnecessary: given that the LRs are updated on hypervisor entry,
> you can be sure at this point they are up to date. If an irq is
> GIC_IRQ_GUEST_VISIBLE, then it means that is PENDING or ACTIVE (or
> PENDING and ACTIVE). Given that we have mandated that there cannot be
> any ACTIVE interrupts, then we could skip all this and just
> 
> if ( visible )
>     s->state |= HVM_HW_IRQ_STATE_PENDING;

Ah, the update on entry was the bit I had missed here which allows this.
I'll replace the logic as you suggest and add a comment.
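
Something like this (a sketch of the replacement plus the comment I have in
mind):

    /*
     * The LRs are synced back on every hypervisor entry, so the status
     * bits are up to date here. A VISIBLE interrupt is PENDING and/or
     * ACTIVE, and we refuse to save ACTIVE interrupts, so VISIBLE here
     * simply means pending from the guest's point of view.
     */
    if ( visible )
        s->state |= HVM_HW_IRQ_STATE_PENDING;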

> 
> 
> > +        /*
> > +         * HVM_HW_IRQ_STATE_ACTIVE: We currently do not handle save/restore
> > +         * with active interrupts.
> > +         */
> > +        if ( lr.state & GICH_LR_ACTIVE )
> > +        {
> > +            dprintk(XENLOG_G_ERR,
> > +                    "HVM%d save: Cannot save active local IRQ%u\n",
> > +                    v->domain->domain_id, irq);
> > +            return -EINVAL;
> > +        }
> > +    }
> > +
> > +    /*
> > +     * HVM_HW_IRQ_STATE_EDGE: we implement ICFG[0] and [1] as RAZ/WI.
> > +     */
> > +
> > +    spin_unlock(&rank->lock);
> > +
> > +    return 0;
> > +}
> 
> We need to save/restore the target vcpu for each virq.

We only support PPIs and SGIs here, and ITARGETSR and IROUTER aren't
writeable for those. If/when SPIs are added (which would be if/when we add
HVM style guests) we would indeed need to do so.

However, for the sake of ABI compatibility in the future, I think adding
the fields to the structs and initialising them 1:1 with the vcpus and
checking that on restore does make sense, so I'll do that.
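
i.e. roughly (a sketch; the name of the new field in the save record is not
final):

    /* Save: PPIs/SGIs are banked per-vcpu, so the target is always 1:1. */
    s->vcpu = v->vcpu_id;

    /* Restore: ITARGETSR/IROUTER are not writeable for SGIs/PPIs, so
     * anything other than the 1:1 mapping means the record is bogus. */
    if ( s->vcpu != v->vcpu_id )
        return -EINVAL;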
> 
[...]
> > +    ASSERT(s->irq < 32); /* SPI not supported yet */
> > +
> > +    rank->priority[s->irq % 32] = s->priority;
> 
> we need to restore the target vcpu

We just need to check it is 1:1 since it can't be an SPI.

> 
> 
> > +    if ( s->state & HVM_HW_IRQ_STATE_ENABLED )
> > +        set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
> > +
> > +    /* If the IRQ was either PENDING or ACTIVE, then QUEUE it to be reinjected. */
> > +    if ( s->state & (HVM_HW_IRQ_STATE_PENDING|HVM_HW_IRQ_STATE_ACTIVE) )
> > +    {
> > +        reraise = 1;
> > +        set_bit(GIC_IRQ_GUEST_QUEUED, &p->status);
> > +    }
> > +
> > +    /* This will check for enabled etc */
> > +    if ( reraise ) /* v != current, so will add to lr_pending */
> 
> You don't need reraise: just add the call to gic_raise_guest_irq under
> the previous if statement.

This was a vestige of there being other reasons to reraise which turned out
not to be needed; I'll move it.
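
So the restore path becomes something like (a sketch, assuming
gic_raise_guest_irq() keeps its current (vcpu, virq, priority) arguments):

    /* If the IRQ was either PENDING or ACTIVE, then QUEUE it to be
     * reinjected. */
    if ( s->state & (HVM_HW_IRQ_STATE_PENDING|HVM_HW_IRQ_STATE_ACTIVE) )
    {
        set_bit(GIC_IRQ_GUEST_QUEUED, &p->status);

        /* This will check for enabled etc; v != current, so it will add
         * to lr_pending. */
        gic_raise_guest_irq(v, s->irq, s->priority);
    }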

> > +    {
> > +        /*
> > +         * This uses the current IPRIORITYR, which may differ from
> > +         * when the IRQ was actually made pending. h/w spec probably
> > +         * allows this, XXXX check
> > +         */
> 
> From this VM point of view the IPRIORITYR is s->priority. I would remove
> the comment.

If you have an IRQ<A> with priority 55 which fires, and then while it is
pending the priority is changed to 75, then I'm not sure what impact that
has on the priority which the hardware considers the already pending
interrupt to have.

To make it more complex, consider a second interrupt IRQ<B> with priority
65: if the handler for that was the one which changed IRQ<A>'s priority,
should it expect to be immediately preempted by IRQ<A>?
The comment is there because I'm not sure what behaviour the spec requires.

In terms of our code, without the save/restore I think the pending IRQ<A>
would remain at priority 55 and the guest would get IRQ<A> when IRQ<B>
EOIs. Whereas upon restore we would reraise it with priority 75 and IRQ<A>
would appear to preempt IRQ<B>, which would be an interesting difference in
behaviour from the PoV of the guest.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
