
Re: [Xen-devel] [PATCH for-4.5 v6 13/17] xen/arm: Data abort exception (R/W) mem_events.

On Mon, 2014-09-15 at 16:02 +0200, Tamas K Lengyel wrote:
> This patch enables to store, set, check and deliver LPAE R/W mem_events.
> As the LPAE PTE's lack enough available software programmable bits,
> we store the permissions in a Radix tree, where the key is the pfn
> of a 4k page. Only settings other than p2m_access_rwx are saved
> in the Radix tree.

It is here that it is absolutely essential to consider the overhead for
the non-xenaccess use case, and I think that analysis should be spelled
out in the commit message.

I'm worried because I'm not seeing the obvious "xenaccess is disabled,
skip all this stuff" path I was hoping for.

I think the overhead of the p2m modifications and the overhead on fault
need to be considered separately.

For cases where xenaccess isn't enabled, IMHO we need to be talking
about overhead in the ballpark of a pointer deref, a compare and a
conditional jump. Certainly not any kind of radix tree lookup etc.

> @@ -149,6 +152,74 @@ static lpae_t *p2m_map_first(struct p2m_domain *p2m, 
> paddr_t addr)
>      return __map_domain_page(page);
>  }
> +static void p2m_set_permission(lpae_t *e, p2m_type_t t, p2m_access_t a)
> +{

It would have been good to refactor this, and probably p2m_shatter_page
too, in precursor patches to reduce the size of this already pretty big
patch.

> @@ -1159,6 +1329,236 @@ err:
>      return page;
>  }
> +bool_t p2m_mem_access_check(paddr_t gpa, vaddr_t gla, const struct npfec 
> npfec)

Couldn't a bunch of this function be common code, i.e. the
inject-an-event-into-a-ring bits, or even the automatic rx2rw
conversion etc.?

> @@ -248,6 +249,23 @@ static inline bool_t p2m_mem_event_sanity_check(struct 
> domain *d)
>      return 1;
>  }
> +/* Send mem event based on the access (gla is -1ull if not available). 
> Boolean

Is a "gla" a "guest linear address"? I don't think we use that term on
ARM.


Xen-devel mailing list


