
Re: [PATCH v5 11/13] xen/arm: mmu: move MMU specific P2M code to mmu/p2m.{c,h}



Hi Henry,

On 14/08/2023 05:25, Henry Wang wrote:
> From: Penny Zheng <penny.zheng@xxxxxxx>
>
> Current P2M implementation is designed for MMU system only.
> We move the MMU-specific codes into mmu/p2m.c, and only keep generic
> codes in p2m.c, like VMID allocator, etc. We also move MMU-specific
> definitions and declarations to mmu/p2m.h, such as p2m_tlb_flush_sync().
> Also expose previously static functions p2m_vmid_allocator_init(),
> p2m_alloc_vmid(), __p2m_set_entry() and setup_virt_paging_one()

Looking at the code, it seems that you need to keep __p2m_set_entry() exposed because of p2m_relinquish_mapping(). However, it is not clear how this code is supposed to work for the MPU. So should we instead move p2m_relinquish_mapping() to mmu/p2m.c?
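
For context, the dependency looks roughly like this (a simplified sketch of the loop in xen/arch/arm/p2m.c; the real code also handles preemption and page refcounting):

/* Sketch: p2m_relinquish_mapping() tears down entries one by one
 * through __p2m_set_entry(), which the patch moves to mmu/p2m.c. */
int p2m_relinquish_mapping(struct domain *d)
{
    struct p2m_domain *p2m = p2m_get_hostp2m(d);
    unsigned long gfn;
    int rc = 0;

    p2m_write_lock(p2m);

    for ( gfn = gfn_x(p2m->lowest_mapped_gfn);
          gfn <= gfn_x(p2m->max_mapped_gfn); gfn++ )
    {
        /* Removing the mapping requires the (currently static)
         * __p2m_set_entry(). */
        rc = __p2m_set_entry(p2m, _gfn(gfn), 0, INVALID_MFN,
                             p2m_invalid, p2m_access_rwx);
        if ( rc )
            break;
    }

    p2m_write_unlock(p2m);

    return rc;
}

So either __p2m_set_entry() stays exposed, or the whole loop moves along with it.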

Other functions which don't seem to make sense in p2m.c are:
* p2m_clear_root_pages(): AFAIU there is no concept of root in the MPU. This also means that we possibly want to move out anything specific to the MMU from 'struct p2m' (see the sketch below). This could be done separately.
* p2m_flush_vm(): This is built with MMU in mind, as we can use the page-tables to track accessed pages. You don't have that fine granularity in the MPU.
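
To illustrate the 'struct p2m' point, one possible split would look like this (a sketch only: struct p2m_mmu and the field grouping are hypothetical, and the real struct p2m_domain in asm/p2m.h has many more fields):

/* Hypothetical grouping: keep the MMU-only state together so the
 * MPU build does not carry it. */
struct p2m_mmu {
    struct page_info *root;   /* Root of the stage-2 page-tables. */
    uint64_t vttbr;           /* Cached VTTBR for this p2m. */
};

struct p2m_domain {
    rwlock_t lock;            /* Generic: protects the p2m. */
    uint16_t vmid;            /* Generic: from the VMID allocator. */
#ifdef CONFIG_HAS_MMU
    struct p2m_mmu mmu;       /* MMU-specific state, root included. */
#endif
    /* ... */
};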

> for futher MPU usage.

Typo: s/futher/further/


> With the code movement, global variable max_vmid is used in multiple
> files instead of a single file (and will be used in MPU P2M
> implementation), declare it in the header and remove the "static" of
> this variable.
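
Concretely, that part of the movement would amount to something like this (sketch; the exact placement in the header is up to the patch):

/* p2m.h: make the variable visible to mmu/p2m.c and, later, to the
 * MPU implementation. */
extern unsigned int max_vmid;

/* p2m.c: previously "static unsigned int __read_mostly max_vmid". */
unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;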

> Add #ifdef CONFIG_HAS_MMU to p2m_write_unlock() since future MPU
> work does not need p2m_tlb_flush_sync().

And there is no specific barrier required? Overall, I am not sure I like the #ifdef rather than providing a stub helper.

If the others like the idea of the #ifdef, then I think a comment on top would be necessary to explain why there is nothing to do in the context of the MPU.
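
For comparison, the stub approach would keep p2m_write_unlock() free of any #ifdef (a sketch; the mpu/p2m.h header name is hypothetical):

/* mmu/p2m.h: the real flush. */
void p2m_tlb_flush_sync(struct p2m_domain *p2m);

/* mpu/p2m.h (hypothetical): nothing to do, as the MPU has no stage-2
 * TLB entries to invalidate. The comment carries the rationale. */
static inline void p2m_tlb_flush_sync(struct p2m_domain *p2m) {}

/* p2m_write_unlock() then stays common and unconditional: */
static inline void p2m_write_unlock(struct p2m_domain *p2m)
{
    /* Flush before releasing the lock, so no-one can modify the P2M
     * before the TLB invalidation (if any) has completed. */
    p2m_tlb_flush_sync(p2m);
    write_unlock(&p2m->lock);
}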

Cheers,

--
Julien Grall



 

