
Re: [Xen-devel] [Draft D] Xen on ARM vITS Handling



On 05/06/15 14:20, Ian Campbell wrote:
> On Fri, 2015-06-05 at 13:15 +0100, Julien Grall wrote:
>>>> I'm struggling to see how this would work. After enabling/disabling an
>>>> IRQ you need to send an INV, which requires the devID and the eventID.
>>>
>>> This is a detail for the pits driver, which is deliberately out of scope
>>> here. The interface to the core Xen system is enable_irq/disable_irq, and
>>> so that is what we must use here.
>>>
>>> I imagine the pits driver will need to store some details about this
>>> somewhere for its own internal purposes, perhaps in struct its_device.
>>
>> It's rather strange to put this out of scope because it would have
>> solved your issue (i.e. getting the vDevID, ID).
> 
> Details of the pits are out of scope so the fact that enabling or
> disabling an interrupt needs a specific set of operations and info is
> out of scope.

I wasn't speaking about enable/disable IRQ but about your paragraph: "Note that
in the context of this emulation we do not have access to a
Device ID, and therefore cannot make decisions based on whether the
LPI/event has been `MAPD`d etc. In any case we have an `lpi` in our
hand and not an `event`, IOW we would need to do a _reverse_ lookup in
the ITT."
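
To illustrate the cost, here is a minimal sketch in C of what such a
reverse lookup would mean; all names (struct its_device, itt_vlpi,
vlpi_to_event) are invented for illustration, and since Draft D keeps
the ITT in guest RAM each itt_vlpi[e] access would really be a
validated guest-memory read:

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical per-device state; the real ITT lives in guest RAM. */
    struct its_device {
        uint32_t devid;
        uint32_t nr_events;
        uint32_t *itt_vlpi;   /* itt_vlpi[event] == vLPI for that event */
    };

    /* Given only a vLPI (e.g. from an LPI CFG table write), recovering
     * (DeviceID, EventID) means scanning every event of every device:
     * O(devices x events) per lookup. */
    static bool vlpi_to_event(struct its_device *devs, unsigned int nr_devs,
                              uint32_t vlpi, uint32_t *devid, uint32_t *event)
    {
        for ( unsigned int d = 0; d < nr_devs; d++ )
            for ( uint32_t e = 0; e < devs[d].nr_events; e++ )
                if ( devs[d].itt_vlpi[e] == vlpi )
                {
                    *devid = devs[d].devid;
                    *event = e;
                    return true;
                }
        return false;
    }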

> The fact that a vLPI->pLPI mapping might be helpful for some other issue
> within the vits is a separate consideration. You might be correct that
> this would help the LPI CFG emulation case, but:
> 
>> If you have a mapping vLPI -> pLPI, you can therefore get the its_device
>> which stores the vDevID, and you can deduce the eventID.
> 
> Previously you mentioned multiple (device,event) pairs mapping to the same
> (v)LPI. I hoped we could rule this out but I wasn't sure, and frankly
> I'm no longer hopeful that it is the case.
> 
> That possibility makes a vLPI->pLPI mapping trickier (if not impossible)
> to maintain.
>
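
A minimal sketch of that difficulty in C (all names hypothetical): a
per-domain vLPI -> pLPI table is only well defined if the guest keeps
the mapping injective, so the emulation has to decide what to do when
a second (Device,Event) is mapped to an already-used vLPI:

    #include <stdint.h>

    #define LPI_BASE     8192   /* LPI INTIDs start at 8192 */
    #define INVALID_PLPI 0

    struct vits {
        uint32_t *vlpi_to_plpi;   /* indexed by (vLPI - LPI_BASE) */
        uint32_t nr_lpis;
    };

    static int map_vlpi(struct vits *v, uint32_t vlpi, uint32_t plpi)
    {
        uint32_t idx = vlpi - LPI_BASE;

        if ( vlpi < LPI_BASE || idx >= v->nr_lpis )
            return -1;            /* always bounds-check guest input */

        /* If another (Device,Event) already claimed this vLPI we can
         * either reject the MAPVI, or overwrite and lose the ability
         * to tell which pLPI to EOI later. */
        if ( v->vlpi_to_plpi[idx] != INVALID_PLPI &&
             v->vlpi_to_plpi[idx] != plpi )
            return -1;

        v->vlpi_to_plpi[idx] = plpi;
        return 0;
    }
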
>>>> This is not the only issue. With your solution, you leave the guest the
>>>> possibility of having one vLPI for multiple (devID, ID) pairs/LPIs.
>>>
>>> I'm not sure, but isn't this UNPREDICTABLE or something? The "Where:"
>>> under Figure 19 (and most of the other related tables) suggests that the
>>> identifier must be unique. I'd like to find a stronger statement if
>>> we were to rely on this though, true.
>>
>> I think so. But you are assuming that the guest is nice and will never
>> touch the memory provisioned to the ITS (ITT, Device Table...).
> 
> No, I'm not. The document was very clear that Xen should never trust
> anything written in that memory and should always carefully bounds check
> it.
> 
> But if a guest wants to scribble over that memory and mess up its own
> interrupt handling it is welcome to do so, just like on native. We don't
> need to protect the guest against itself, just the host against the
> guest.

I should have made clear that I only care about Xen, not about the guest
shooting itself.

>> The vLPI is stored in guest memory. A malicious guest could decide
>> to modify different ITT entries to point to the same vLPI. It would be
>> impossible to know which pLPI we have to EOI.
> 
> There may be an issue here. Thinking out loud:
> 
> At the point we come to inject a vLPI we do (or could) know the pLPI,
> because we have the irq_desc/guest_irq (or something which would let us
> find them) in our hands.

Agreed.
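
A minimal sketch of that observation in C (struct layouts and the
vgic_inject_vlpi helper are invented, only loosely modelled on Xen's
irq_desc/irq_guest): when the physical LPI fires we already hold its
descriptor, so both the pLPI (desc->irq) and the vLPI recorded at
routing time are in hand, with no ITT walk needed:

    #include <stdint.h>

    struct domain;

    struct irq_guest_info {
        struct domain *d;
        uint32_t virq;        /* vLPI stashed when the route was set up */
    };

    struct irq_desc {
        unsigned int irq;     /* the pLPI */
        struct irq_guest_info *guest;
    };

    /* Hypothetical stand-in for the vGIC injection path. */
    void vgic_inject_vlpi(struct domain *d, uint32_t virq);

    /* Top-level handler for a physical LPI routed to a guest. */
    static void do_lpi(struct irq_desc *desc)
    {
        if ( desc->guest )
            vgic_inject_vlpi(desc->guest->d, desc->guest->virq);
    }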

> 
> On native it's unclear if mapping multiple (Device,Event) pairs to a
> single LPI is valid (or a single (Collection,LPI) pair, I'm not sure it
> matters). I can't find where it is ruled out, so let's assume it is
> allowed.
> 
> In that case what would happen if a (DeviceB,EventB) triggers a given
> LPI which has already been triggered and is pending due to
> (DeviceA,EventA)? LPIs are edge-triggered, so I think the second
> interrupt would simply be ignored.
> 
> Can we do the same? Would this be harmful to anyone other than the guest
> itself (who would no longer receive some interrupts, but that would be its
> own fault)?

As discussed IRL, I think it's fine.
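
A minimal sketch of that behaviour in C (the pending flag and helper
name are invented): if the vLPI is already pending, a further
(Device,Event) firing it is simply dropped, which is effectively what
an edge-triggered interrupt would see on native:

    #include <stdbool.h>
    #include <stdint.h>

    struct pending_irq {
        bool pending;
        uint32_t virq;        /* the vLPI */
    };

    /* Returns true if the caller should queue the vLPI for injection,
     * false if the edge is collapsed into the already-pending one. */
    static bool lpi_set_pending(struct pending_irq *p)
    {
        if ( p->pending )
            return false;     /* second edge: ignore, as on native */
        p->pending = true;
        return true;
    }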

irq_desc->status may be inconsistent with the state of the hardware at
some point. It's required during domain destruction to know whether we need
to EOI an IRQ assigned to a guest. But for a first approach we could go
through the IRQs and EOI them regardless of the status.
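
As a first-approach sketch in C (gic_host_eoi and the pLPI list are
hypothetical): on domain destruction, rather than trusting a possibly
stale irq_desc->status, we could unconditionally EOI every pLPI that
was routed to the dying domain, a spurious EOI being harmless there:

    void gic_host_eoi(unsigned int irq);   /* assumed host GIC helper */

    static void teardown_domain_lpis(const unsigned int *plpis,
                                     unsigned int nr_plpis)
    {
        for ( unsigned int i = 0; i < nr_plpis; i++ )
            gic_host_eoi(plpis[i]);        /* EOI regardless of status */
    }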

>>>>  Even if we
>>>> validate it when the MAPI/MAPVI command is sent, the guest could
>>>> modify the ITT itself.
>>>>
>>>> Furthermore, each virtual IRQ of a domain is associated with a struct
>>>> pending_irq. This structure contains an irq_desc, which is a pointer to
>>>> the physical IRQ. With a crafted ITT the guest could mess up those
>>>> data structures. Although, I think we can get back on our feet after the
>>>> domain is destroyed.
>>>>
>>>> After reading this draft, I think we can avoid browsing the Device
>>>> Table and the ITT. As said in the previous paragraph, the pending_irq
>>>> structure is linked to an irq_desc. In your proposal, you suggested
>>>> storing the its_device in the irq_guest (part of irq_desc). If we make
>>>> use of pending_irq->desc to store the physical descriptor, we can have
>>>> a mapping vLPI <=> pLPI. Therefore, this would resolve UI1 and, AFAICT,
>>>> the memory usage in Xen would be the same as with the ITT/Device Table
>>>> based solution.
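
A minimal sketch in C of the suggestion above (field names invented):
storing the physical descriptor in pending_irq makes the per-domain
pending_irq table, indexed by vLPI, give vLPI -> pLPI, while the
irq_desc side (as in the earlier sketch) gives pLPI -> vLPI, so
neither direction needs to read the guest's ITT or Device Table:

    #include <stddef.h>
    #include <stdint.h>

    #define LPI_BASE 8192

    struct irq_desc;

    struct pending_irq {
        uint32_t virq;            /* the vLPI */
        struct irq_desc *desc;    /* the routed pLPI's descriptor, or NULL */
    };

    static struct irq_desc *vlpi_to_desc(struct pending_irq *lpis,
                                         uint32_t nr_lpis, uint32_t vlpi)
    {
        uint32_t idx = vlpi - LPI_BASE;

        return ( vlpi >= LPI_BASE && idx < nr_lpis ) ? lpis[idx].desc : NULL;
    }
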
>>>
>>> If you have a scheme along those lines that would be good. From your
>>> next message:
>>>
>>>> BTW, I will suggest a pseudo-code based on your draft tomorrow. It may 
>>>> be easier to understand.
>>>
>>> Yes, please, but please update everything which is affected, add new
>>> sections as necessary, etc., so the document remains a self-consistent,
>>> plausible design.
>>
>> I'm afraid to say that I won't be able to come up with a full proposal
>> today and I will be away from tonight until the 15th of June.
>>
>> I can try to suggest a few leads, but I still need to think about the INT
>> problem.
> 
> Be aware that I _have_ thought about the INT problem, which resulted in
> the current design and specifically in the decision to keep the device
> table (and by extension the ITTs) in guest RAM. This wasn't a whim on my
> part; I thought quite hard about it.

I know :/. I read your answer to Vijay's mail. I don't much like the
suggestion made there for making things work.

After your last answer and the talk IRL, I'm fine with this design.

Regards,

-- 
Julien Grall
