Re: [Xen-devel] [PATCH RESEND v5 1/6] xen/arm: Implement hvm save and restore
To the list this time as well.
On Wed, 2013-11-13 at 11:47 +0000, Ian Campbell wrote:
> On Wed, 2013-11-13 at 15:50 +0400, Eugene Fedotov wrote:
> > 13.11.2013 14:56, Ian Campbell ÐÐÑÐÑ:
> > > On Wed, 2013-11-13 at 12:00 +0400, Eugene Fedotov wrote:
> > >
> > >>> [...]
> > >>>> +static int vgic_irq_rank_save(struct vgic_rank *ext,
> > >>>> + struct vgic_irq_rank *rank)
> > >>>> +{
> > >>>> + spin_lock(&rank->lock);
> > >>> This is probably wise, but the domain must be paused at this point,
> > >>> right?
> > >> I think this lock can be removed, thanks.
> > > I'd be happy for it to stay as a belt-and-braces/good practice thing.
> > >
> > >>>> + return -EINVAL;
> > >>>> + }
> > >>>> + memcpy(ext->ipriority, rank->ipriority, sizeof(rank->ipriority));
> > >>>> + /* ITARGETS */
> > >>>> + if ( sizeof(rank->itargets) != sizeof (ext->itargets) )
> > >>>> + {
> > >>>> + dprintk(XENLOG_G_ERR, "hvm_hw_gic: check itargets dumping space\n");
> > >>>> + return -EINVAL;
> > >>>> + }
> > >>>> + memcpy(ext->itargets, rank->itargets, sizeof(rank->itargets));
> > >>>> + spin_unlock(&rank->lock);
> > >>>> + return 0;
> > >>>> +}
> > >>>> +
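[One thing worth noting about the locking discussion above: if the
spin_lock stays as a belt-and-braces measure, the early error returns
need to drop it before returning. A minimal sketch of how that could
look; the elided IPRIORITY size check is assumed to mirror the
ITARGETS one, and all names follow the quoted patch:

    static int vgic_irq_rank_save(struct vgic_rank *ext,
                                  struct vgic_irq_rank *rank)
    {
        spin_lock(&rank->lock);
        /* IPRIORITY */
        if ( sizeof(rank->ipriority) != sizeof(ext->ipriority) )
        {
            dprintk(XENLOG_G_ERR, "hvm_hw_gic: check ipriority dumping space\n");
            spin_unlock(&rank->lock);   /* don't leak the lock on error */
            return -EINVAL;
        }
        memcpy(ext->ipriority, rank->ipriority, sizeof(rank->ipriority));
        /* ITARGETS */
        if ( sizeof(rank->itargets) != sizeof(ext->itargets) )
        {
            dprintk(XENLOG_G_ERR, "hvm_hw_gic: check itargets dumping space\n");
            spin_unlock(&rank->lock);   /* likewise */
            return -EINVAL;
        }
        memcpy(ext->itargets, rank->itargets, sizeof(rank->itargets));
        spin_unlock(&rank->lock);
        return 0;
    }
]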
> > >>>> +static int gic_save(struct domain *d, hvm_domain_context_t *h)
> > >>>> +{
> > >>>> + struct hvm_hw_gic ctxt;
> > >>>> + struct vcpu *v;
> > >>>> +
> > >>>> + /* Save the state of GICs */
> > >>>> + for_each_vcpu( d, v )
> > >>> Where is the GICD state saved then?
> > >> The only GICD structure we save for a guest domain is struct
> > >> vgic_irq_rank; it holds the IENABLE, IACTIVE, IPEND, PENDSGI, ICFG,
> > >> IPRIORITY and ITARGETS registers. We mirror it with a structure in
> > >> the hvm save record, vgic_rank (which is not guaranteed to stay
> > >> identical to struct vgic_irq_rank), and save it by calling the
> > >> vgic_irq_rank_save routine below from gic_save.
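[For reference, a sketch of the per-rank save record being described.
The register names come from the mail above; the field widths here are
assumptions for illustration, not taken from the patch:

    struct vgic_rank {
        uint32_t ienable, iactive, ipend, pendsgi;
        uint32_t icfg[2];        /* 2 config bits per interrupt       */
        uint32_t ipriority[8];   /* 8 priority bits per interrupt     */
        uint32_t itargets[8];    /* 8 target-CPU bits per interrupt   */
    };
]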
> > > I can only see one call to vgic_irq_rank_save which is the one to save
> > > PPI state within the vcpu loop. What about the (per-cpu) SGI and
> > > (global) SPIs? I can't see where either of those are saved.
>
> > 1) For a guest domain vgic.nr_lines is set to 0 (in the arch_domain
> > structure; see the "domain_vgic_init" function in vgic.c):
> > d->arch.vgic.nr_lines = 0; /* We don't need SPIs for the guest */
> > Or should we not rely on this, assuming SPIs will be enabled for guests?
>
> Aha! I'd forgotten about that. I think we can safely ignore SPIs until
> device passthrough is added. And even then I don't think we would want
> to migrate with an active passthrough device attached.
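[Given that, one way to make the "no SPIs" assumption explicit rather
than silent would be a guard at the top of gic_save; a sketch only,
reusing the d->arch.vgic.nr_lines field quoted above:

    /* SPI save is not implemented; refuse rather than silently
     * dropping state if a domain ever gets SPIs. */
    if ( d->arch.vgic.nr_lines != 0 )
    {
        dprintk(XENLOG_G_ERR, "hvm_hw_gic: SPI state save not supported\n");
        return -EINVAL;
    }
]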
>
> > 2) Could you point me to where the SGI state lives in arch_vcpu? I
> > thought it is within those 32 IRQs (namely IRQs 0-15) that we have
> > already saved, because the vgic_irq_rank in arch_vcpu is the context
> > for those 32 IRQs.
>
> SGIs are "interrupts" 0..15 while PPIs are "interrupts" 16..31, so if
> you are saving the first rank which contains all 32 of those then I
> think you have probably covered them already. I think you just need to
> update the comment(s) to mention SGI as well as PPI.
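[In other words, with 32 interrupts per rank the rank arithmetic puts
both SGIs and PPIs in rank 0; a sketch of the mapping being described,
where the helper name is illustrative rather than from the patch:

    /* Each vgic_irq_rank covers 32 interrupts, so rank 0 holds the
     * per-vcpu SGIs (0..15) and PPIs (16..31); SPIs (32 and up) fall
     * into the shared ranks 1 and beyond. */
    static inline unsigned int vgic_rank_of_irq(unsigned int irq)
    {
        return irq / 32;
    }
]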
>
> Thanks,
>
> Ian
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel