
Re: [Xen-devel] [PATCH 1/6] xen: extend XEN_DOMCTL_memory_mapping to handle cacheability



On Sun, 21 Apr 2019, Julien Grall wrote:
> > > > diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> > > > index 30cfb01..5b8fcc5 100644
> > > > --- a/xen/arch/arm/p2m.c
> > > > +++ b/xen/arch/arm/p2m.c
> > > > @@ -1068,9 +1068,24 @@ int unmap_regions_p2mt(struct domain *d,
> > > >    int map_mmio_regions(struct domain *d,
> > > >                         gfn_t start_gfn,
> > > >                         unsigned long nr,
> > > > -                     mfn_t mfn)
> > > > +                     mfn_t mfn,
> > > > +                     uint32_t cache_policy)
> > > >    {
> > > > -    return p2m_insert_mapping(d, start_gfn, nr, mfn, p2m_mmio_direct_dev);
> > > > +    p2m_type_t t;
> > > > +
> > > > +    switch ( cache_policy )
> > > > +    {
> > > > +    case CACHEABILITY_MEMORY:
> > > > +        t = p2m_ram_rw;
> > > 
> > > Potentially, you want to clean the cache here.
> > 
> > We have been talking about this and I have been looking through the
> > code. I am still not exactly sure how to proceed.
> > 
> > Is there a reason why cacheable reserved-memory pages should be treated
> > differently from normal memory with regard to cleaning the cache? It
> > seems to me that they should be the same in terms of cache issues.
> 
> Your wording is a bit confusing. I guess what you call "normal memory" is
> guest memory, am I right?

Yes, right. I wonder if we need to come up with clearer terms. Given the
many types of memory we have to deal with, it might become even more
confusing going forward. Guest normal memory maybe? Or guest RAM?


> Any memory assigned to the guest is cleaned & invalidated (technically clean
> is enough) before getting assigned to the guest (see flush_page_to_ram). So
> this patch is introducing a different behavior than what we currently have for
> other normal memory.

This is what I was trying to understand, thanks for the pointer. I am
unsure whether we want to do this for reserved-memory regions too: on
the one hand, it would make things more consistent; on the other hand, I
am not sure it is the right behavior for reserved-memory. Let's think it
through.

The use case is communication with other heterogeneous CPUs. In that
case, it would matter if a domU crashes with the ring mapped and an
unflushed (possibly partial) write to the ring, and is then restarted
with the same ring mapping. In that scenario, it looks like we would
want to clean the cache; it wouldn't matter whether that is done at VM
shutdown or at VM creation time.

So maybe it makes sense to do something like flush_page_to_ram for
reserved-memory pages. It seems simple to do it at VM creation time,
because we could clean/invalidate the cache when map_mmio_regions is
called, either there or from the domctl handler. On the other hand, I
don't know
where to do it at domain destruction time because no domctl is called to
unmap the reserved-memory region. Also, cleaning the cache at domain
destruction time would introduce a difference compared to guest normal
memory.
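
Just to make the idea concrete, I am thinking of something along these
lines in map_mmio_regions (untested sketch, assuming flush_page_to_ram
still takes an mfn plus the sync_icache flag; the other cases and error
handling are omitted):

    case CACHEABILITY_MEMORY:
        t = p2m_ram_rw;

        /*
         * Clean the dcache for the whole region before mapping it,
         * mirroring what flush_page_to_ram already does for guest RAM
         * at allocation time. No need to sync the icache here.
         */
        for ( i = 0; i < nr; i++ )
            flush_page_to_ram(mfn_x(mfn_add(mfn, i)), false);
        break;

(with an "unsigned long i;" local added at the top of the function.)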

I know I said the opposite in our meeting, but maybe cleaning the cache
for reserved-memory regions at domain creation time is the right way
forward?


> But my concern is you may inconsistently use the memory attributes breaking
> coherency. For instance, you map in Guest A with cacheable attributes then
> after the guest died, you remap to guest B with a non-cacheable attributes.
> guest B may have an inconsistent view of the memory mapped.

I think that anything caused by the user selecting the wrong
cacheability attributes is not something we have to support or care
about (more on this below).


> This is one case where cleaning the cache would be necessary. One could
> consider this is part of the "device reset" (this is a bit of the name abuse),
> so Xen should not take care of it.
> 
> The most important bit is to have documentation that reflect the issues with
> such parameters. So the user is aware of what could go wrong when using
> "iomem".

I agree. I'll be very clear in the docs about the consequences of
choosing wrong or inconsistent attributes. I'll also be explicit about
cache flushing, properly documenting the implemented behavior (which at
the moment is that no cache flushes are performed).
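
For the iomem docs, I am thinking of something along these lines for
xl.cfg (just a strawman, the exact keyword and wording still need to be
settled in the next version):

    iomem = [ "IOMEM_START,NUM_PAGES@GFN,CACHEABILITY" ]

    CACHEABILITY is optional; the default keeps today's behavior of
    mapping the range as Device memory, while "memory" maps it with
    cacheable attributes. Xen does not perform any cache flushes when
    the region is mapped or unmapped: it is the user's responsibility
    to choose consistent attributes across all users of the region,
    otherwise stale or inconsistent data may be observed.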

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

