
Re: [PATCH v2] xen/arm: implement GICD_I[S/C]ACTIVER reads

> On Apr 3, 2020, at 11:59 AM, Marc Zyngier <maz@xxxxxxxxxx> wrote:
> George,
> On 2020-04-03 11:43, George Dunlap wrote:
>>> On Apr 3, 2020, at 9:47 AM, Marc Zyngier <maz@xxxxxxxxxx> wrote:
>>> On 2020-04-02 19:52, Julien Grall wrote:
>>>> (+Marc)
>>> Thanks for looping me in. Definitely an interesting read, but also a very
>>> puzzling one.
>> [snip]
>>> No. Low latency is a very desirable thing, but it doesn't matter at all when
>>> you don't even have functional correctness. To use my favourite car analogy,
>>> having a bigger engine doesn't help when you're about to hit the wall and
>>> have no brakes... You just hit the wall faster.
>> [snip]
>>> s/imprecise/massively incorrect/
>> [snip]
>>> There is just no way I'll ever accept a change to the GIC interrupt state
>>> machine for Linux. Feel free to try and convince other OS maintainers.
>> [snip]
>>> If I was someone developing a product using Xen/ARM, I'd be very worried
>>> about what you have written above. Because it really reads "we don't care
>>> about reliability as long as we can show amazing numbers". I really hope
>>> it isn't what you mean.
>> What's puzzling to me is that what everyone else in this thread is
>> saying is that what Stefano is trying to do is to get Xen to behave
>> like KVM.
> Sorry, I don't get what you mean here. KVM at least aims to be architecturally
> compliant. Is it perfect? Most probably not, as we fix it all the time.
> Dealing with the active registers is hard. But as far as I can see,
> we do get them right. Do we sacrifice latency over correctness? Yes.
> And if you have spotted a problem in the way we handle those, pray tell.
>> Are they wrong?  If so, we can just do whatever Linux does.  If not,
>> then you need to first turn all your imprecations about correctness,
>> smashing into walls, concern for the sanity of maintainers and so on
>> towards your own code first.
> I'm really sorry, but you seem to have the wrong end of the stick here.
> I'm not trying to compare Xen to KVM at all. I'm concerned about
> implementing only a small part of the architecture, ignoring the rest,
> and letting guests crash, which is what was suggested here.

The current situation (as I understand it) is that Xen never implemented this 
functionality at all.

Stefano has been trying to convince Julien to implement this register KVM’s 
way, which has good latency in the median case, but in particular 
configurations, has arbitrarily bad worst-case latency for multi-vcpu guests.  
Julien thinks what KVM is doing is wrong and against the spec, and has been 
refusing to let the patches in. 

He has proposed another suggestion which is closer to the spec in terms of 
functionality, and has bounded worst-case latency, but which has worse latency 
and uncontrollable jitter in the median case (or at least, so Stefano believes).

As a compromise, Stefano suggested implementing KVM’s way for single-vcpu 
guests.  This is a strict improvement over the current situation, since at 
least a lot of new guests start working, while they hash out what to do about 
multi-vcpu guests.

My proposal has been to work with KVM to document this deviation from the spec 
for guests running virtualized.  

So it’s you who have the wrong end of the stick; your contempt is misplaced.

If you don’t think it’s a deviation, then please help us convince Julien, so we 
can take Stefano’s patch as-is.  Or, if Julien convinces you that KVM is 
deviating from the spec, then let’s try to work together to see how we can 
implement the necessary functionality efficiently in a virtualized environment.



