
Re: [Xen-devel] Xen ARM community call - meeting minutes and date for the next one



Hi Julien,



On 30 March 2017 at 22:57, Julien Grall <julien.grall@xxxxxxx> wrote:
> Hi Volodymyr
>
> On 30/03/2017 20:19, Volodymyr Babchuk wrote:
>>
>> On 30 March 2017 at 21:52, Stefano Stabellini <sstabellini@xxxxxxxxxx>
>> wrote:
>>>
>>> On Thu, 30 Mar 2017, Volodymyr Babchuk wrote:
>>
>> And yes, my profiler shows that there are ways to further decrease
>> latency. Most obvious way is to get rid of 2nd stage translation and
>> thus eliminate p2m code from the call chain. Currently hypervisor
>> spends 20% of time in spinlocks code and about ~10-15% in p2m. So
>> there definitely are areas to improve :)
>
>
> Correct me if I am wrong. You are suggesting to remove stage-2 MMU
> translation, right? If so, what are the benefits to have some Xen code
> running at EL0?
Because that would be loadable code :) Also, it would provide some
degree of isolation, as this code would communicate with Xen only via
SVCs.
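
To give a rough idea, the app side of such a call could look like the
sketch below (untested, my own invention, nothing like it exists in Xen
yet; the call number convention and the routing of the SVC to EL2 are
assumptions):

#include <stdint.h>

/* Hypothetical "native app" -> Xen call: call number in x0, one
 * argument in x1, result back in x0. Xen would have to arrange for
 * SVCs from the app to trap to EL2 rather than EL1. */
static inline uint64_t xen_app_call(uint64_t nr, uint64_t arg)
{
    register uint64_t x0 asm("x0") = nr;
    register uint64_t x1 asm("x1") = arg;

    asm volatile("svc #0"
                 : "+r" (x0)
                 : "r" (x1)
                 : "memory");
    return x0;
}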

Look, we need two stages in the first place because a conventional guest
wants to control its own MMU. But a Xen native app (let's call it that)
should not control the MMU. In my hack I had to create a stage-1 table
with a 1:1 mapping to make things work. Actually... it just came to me
that I could try to disable the stage-1 MMU and leave only stage 2. Not
sure if that is possible, I need to check the TRM...
But anyway, my initial idea was to disable the stage-2 MMU (drop the VM
bit in HCR_EL2) and program only TTBR0_EL1 and friends. With this
approach there would be no need to save/restore the p2m context when I
switch from guest context to app context and back.
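
In pseudo-C the switch would look roughly like this (untested sketch;
HCR_EL2.VM is bit 0 per the ARMv8 ARM, barriers are minimal and TLB
maintenance is omitted):

#include <stdint.h>

#define HCR_EL2_VM  (1UL << 0)   /* stage-2 translation enable */

static inline uint64_t read_hcr_el2(void)
{
    uint64_t v;
    asm volatile("mrs %0, hcr_el2" : "=r" (v));
    return v;
}

static inline void write_hcr_el2(uint64_t v)
{
    asm volatile("msr hcr_el2, %0; isb" : : "r" (v));
}

/* Hypothetical helper: run the app on a 1:1 stage-1 table, stage 2 off. */
static void switch_to_app(uint64_t app_ttbr0)
{
    write_hcr_el2(read_hcr_el2() & ~HCR_EL2_VM);          /* stage 2 off */
    asm volatile("msr ttbr0_el1, %0; isb" : : "r" (app_ttbr0));
    /* a real switch would also need TLB maintenance here */
}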

-- 
WBR Volodymyr Babchuk aka lorc [+380976646013]
mailto: vlad.babchuk@xxxxxxxxx

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

