
Re: [Xen-devel] [RFC PATCH 11/13] cpufreq: add xen-cpufreq driver



Hi Jan,

On Tue, Oct 14, 2014 at 3:20 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>> On 13.10.14 at 16:29, <andrii.tseglytskyi@xxxxxxxxxxxxxxx> wrote:
>> On Mon, Oct 13, 2014 at 5:11 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>>> On 13.10.14 at 15:38, <andrii.tseglytskyi@xxxxxxxxxxxxxxx> wrote:
>>>> It should be noted that sometimes I2C transactions require
>>>> platform-specific IPs.
>>>> For example, OMAP3+ platforms contain a HW spinlock IP (which is a
>>>> real HW module with its own clocks).
>>>> Each i2c_send call must acquire this HW spinlock, and this is
>>>> something we can't implement in the Xen hypervisor.
>>>
>>> Do you really mean "can't", or rather "don't want to"? It's very
>>> hard for me to imagine something that absolutely can't be done
>>> in the hypervisor.
>>>
>>
>> I mean that we must deal with a platform-specific IP in this case.
>> This is a dependency on specific HW, and the driver will not be
>> simple and generic.
>> Also, I think such interactions are out of scope for the hypervisor.
>> What do you think?
>
> Nothing is really out of scope for the hypervisor. It's always a
> matter of judgment, and looking at the Linux i2c driver subtree I
> don't view its size as problematic (all the more so since I don't
> think you'd need all of it).
>

I would need to sync with the dom0 I2C subtree. Taking into account
that a lot of peripherals use I2C commands as their low-level command
interface, I would need to know about these peripherals inside Xen;
otherwise, how can I stay in sync with them? Why should Xen maintain
platform-specific peripherals if dom0 already does this? I think it is
overhead.
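
For illustration, this is roughly the sequence dom0 already has to go
through for every transfer on these platforms (a minimal sketch in
Linux kernel terms using the generic hwspinlock and i2c APIs; the
function name, timeout value and the way the lock is obtained are made
up for illustration, this is not the real OMAP driver):

/*
 * Sketch only -- not the real OMAP driver.  The lock would be
 * requested once at probe time, e.g. with
 * hwspin_lock_request_specific() and a platform-specific id.
 */
#include <linux/hwspinlock.h>
#include <linux/i2c.h>

#define PMIC_HWSPIN_TIMEOUT_MS  100     /* assumed timeout */

static int pmic_i2c_xfer(struct i2c_adapter *adap,
                         struct hwspinlock *hwlock,
                         struct i2c_msg *msgs, int num)
{
        int ret;

        /* Serialise against the other masters sharing the bus/IP. */
        ret = hwspin_lock_timeout(hwlock, PMIC_HWSPIN_TIMEOUT_MS);
        if (ret)
                return ret;

        ret = i2c_transfer(adap, msgs, num);

        hwspin_unlock(hwlock);
        return ret;
}

Doing the same thing inside Xen would mean teaching the hypervisor
about the hwspinlock IP (including its clocks), which is exactly the
kind of platform knowledge dom0 already has.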


>>> Leaving aside that there are no real context switches between a
>>> domain and the hypervisor (only domains, or more precisely vCPU-s,
>>> get context switched), I'm not sure we need to be worried by these
>>> numbers. Whether they're problematic depends significantly on the
>>> time a full I2C command takes to issue (and perhaps complete). And
>>> then I'm sure you're aware that hypercalls can be batched, so as
>>> long as not every one of these 50 commands depends on results from
>>> the immediately preceding one, the hypercall cost can certainly be
>>> amortized to a certain degree.
>>
>> But if each I2C command depends on the results of the previous one,
>> we can't use such batched calls, right? Can we really rely on this?
>> Some time ago I had a model (for testing unrelated to this thread)
>> where I sent about 20 hypercalls per second.
>> I observed lags in use cases such as video playback in domU (Android
>> Jelly Bean as domU). Maybe if we have only Xen and dom0, everything
>> will be fine and we can send as many hypercalls as we want. But I'm
>> worried that in our case this will not work.
>
> If 20 hypercalls a second are a problem, then I think the device isn't
> capable enough in the first place to run a virtualized workload, and
> if it's so overloaded it's likely also not really useful to reduce the
> CPU frequency (as then you'd end up having even more performance
> problems).

OMAP5+ platforms are certainly capable enough to run a virtualized
workload, but this solution may reduce performance a lot. These 20
hypercalls per second would be quite heavy in our case.
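
Even with the batching Jan mentions above, the dom0 side would look
roughly like this (sketch only: HYPERVISOR_multicall and struct
multicall_entry are the existing batching mechanism, but the I2C
sub-op, its argument layout and struct i2c_cmd are purely
hypothetical -- no such hypercall exists today):

#include <xen/interface/xen.h>
#include <asm/xen/hypercall.h>

#define NR_I2C_CMDS 50

/* struct i2c_cmd is a placeholder for whatever per-command payload
 * such a hypercall would take. */
static int issue_i2c_batch(struct i2c_cmd *cmds)
{
        static struct multicall_entry mc[NR_I2C_CMDS];
        int i;

        for (i = 0; i < NR_I2C_CMDS; i++) {
                mc[i].op = __HYPERVISOR_platform_op;     /* or a dedicated op */
                mc[i].args[0] = (unsigned long)&cmds[i]; /* hypothetical layout */
        }

        /* One guest/hypervisor transition instead of 50. */
        return HYPERVISOR_multicall(mc, NR_I2C_CMDS);
}

As Jan notes, this only amortizes the cost when the commands in a
batch can be built up front, i.e. when they don't depend on each
other's results -- which is exactly what I'm not sure we can rely on.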

regards,
Andrii

>
> Jan
>



-- 

Andrii Tseglytskyi | Embedded Dev
GlobalLogic
www.globallogic.com

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

