
Re: [Xen-devel] [PATCH v5 16/21] xen/arm: split vgic driver into generic and vgic-v2 driver



On Sun, 2014-06-15 at 18:04 +0100, Julien Grall wrote:
> You've reintroduced XSA-94 here (see bf70db7 "vgic: Check rank in
> GICD_ICFGR* emulation before locking"). When you send a new version of
> a series, please *check* whether this code has since been updated to
> fix errors.

One technique I use here is, given the patch in file "x":
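        # A = code removed by the patch, B = code added; diffing them
        # shows the net change across the move.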
        grep '^-' x | grep -v '^---' | cut -c2- > A
        grep '^+' x | grep -v '^+++' | cut -c2- > B
        diff -u A B
        
It's far from perfect and relies on the code order not changing too
drastically over the movement, but it would have caught this.

> I saw you shared part of the emulation between the distributor and the
> redistributor in GICv3. I think you could also share it with GICv2;
> this would avoid having to fix the same bug in two places (or worse,
> only fixing it in one).

I'm not convinced that sharing the vgic-v2 and vgic-v3 code wouldn't end
up being more confusing in the long run.

> > -static int vgic_to_sgi(struct vcpu *v, register_t sgir)
> > +int vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode, int virq,
> > +                unsigned long vcpu_mask)
> 
> 
> You can't assume that all the VCPU bits will fit in an unsigned long.
> We will have to use cpumask_t at some point.
> 
> I'm fine if you don't handle it for now, but you need to *write down*
> this function's limitation somewhere.

To be fair, this is a preexisting restriction, and it's far from the
only place which will need fixing.
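FWIW the kind of note Julien is asking for needn't be more than a
comment on the prototype, something like (wording is just a sketch,
mine):

        /* Deliver SGI @virq to the vCPUs named in @vcpu_mask.
         *
         * NB: @vcpu_mask being an unsigned long restricts us to the
         * first BITS_PER_LONG vCPUs. That matches the current limits
         * elsewhere in the vgic, but it will have to become a
         * cpumask_t if guests ever get more vCPUs than that.
         */
        int vgic_to_sgi(struct vcpu *v, register_t sgir,
                        enum gic_sgi_mode irqmode, int virq,
                        unsigned long vcpu_mask);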

> [..]
> 
> > +    case SGI_TARGET_OTHERS:
> 
> [..]
> 
> > +    case SGI_TARGET_SELF:
> 
> For these 2 cases, you can't assume that vcpu_mask will be equal to
> 0... It comes from GICD_SGIR...

This is passed from the caller, isn't it?
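i.e. I'd expect the call site to derive the mask per mode before calling
in, something along these lines (sketch only, not the patch code; the
GICD_SGI_TARGET_* names are my assumption):

        unsigned long vcpu_mask = 0;
        unsigned int i;

        switch ( irqmode )
        {
        case SGI_TARGET_LIST:
            /* Only here does the mask really come from GICD_SGIR. */
            vcpu_mask = (sgir & GICD_SGI_TARGET_MASK)
                        >> GICD_SGI_TARGET_SHIFT;
            break;
        case SGI_TARGET_SELF:
            vcpu_mask = 1UL << v->vcpu_id;
            break;
        case SGI_TARGET_OTHERS:
            /* Everyone but ourselves (same BITS_PER_LONG caveat). */
            for ( i = 0; i < v->domain->max_vcpus; i++ )
                if ( i != v->vcpu_id )
                    vcpu_mask |= 1UL << i;
            break;
        }
        vgic_to_sgi(v, sgir, irqmode, virq, vcpu_mask);

So whatever garbage is in GICD_SGIR's target list field for SELF/OTHERS
never reaches vgic_to_sgi.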

Ian.




 

