
Re: [Xen-devel] [PATCH 1/6] xen: extend XEN_DOMCTL_memory_mapping to handle cacheability



On Thu, 25 Apr 2019, Jan Beulich wrote:
> >>> On 17.04.19 at 23:12, <sstabellini@xxxxxxxxxx> wrote:
> > On Wed, 27 Feb 2019, Jan Beulich wrote:
> >> >>> On 27.02.19 at 00:07, <sstabellini@xxxxxxxxxx> wrote:
> >> > --- a/xen/include/public/domctl.h
> >> > +++ b/xen/include/public/domctl.h
> >> > @@ -571,12 +571,14 @@ struct xen_domctl_bind_pt_irq {
> >> >  */
> >> >  #define DPCI_ADD_MAPPING         1
> >> >  #define DPCI_REMOVE_MAPPING      0
> >> > +#define CACHEABILITY_DEVMEM      0 /* device memory, the default */
> >> > +#define CACHEABILITY_MEMORY      1 /* normal memory */
> >> >  struct xen_domctl_memory_mapping {
> >> >      uint64_aligned_t first_gfn; /* first page (hvm guest phys page) in range */
> >> >      uint64_aligned_t first_mfn; /* first page (machine page) in range */
> >> >      uint64_aligned_t nr_mfns;   /* number of pages in range (>0) */
> >> >      uint32_t add_mapping;       /* add or remove mapping */
> >> > -    uint32_t padding;           /* padding for 64-bit aligned structure */
> >> > +    uint32_t cache_policy;      /* cacheability of the memory mapping */
> >> >  };
> >> 
> >> I don't think DEVMEM and MEMORY are anywhere near descriptive
> >> enough, nor - if we want such control anyway - flexible enough. I
> >> think what you want is to actually specify cacheability, allowing on
> >> x86 to e.g. map frame buffers or the like as WC. The attribute then
> >> would (obviously and necessarily) be architecture specific.
> > 
> > Yes, I agree with what you wrote, and also with what Julien wrote. Now
> > the question is how you both think this should look in more detail:
> > 
> > - are you OK with using memory_policy instead of cache_policy like
> >   Julien's suggested as name for the field?
> 
> Yes - in fact either is fine to me.
> 
> > - are you OK with using #defines for the values?
> 
> Yes.
> 
> > - should the #defines for both x86 and Arm be defined here or in other
> >   headers?
> 
> I'd say here, but I wouldn't object to placement in arch-
> specific public headers.
> 
> > - what values would you like to see for x86?
> 
> Unless you intend to implement the function for x86, I'd
> suggest not adding any x86 #define-s at all for now.
> 
> But I agree with Julien (in case this wasn't explicit enough from
> my earlier reply) that it first needs to be clarified whether such
> an interface is wanted in the first place.

I have written down a few more details about the use case elsewhere;
I'll copy/paste them here:

  Xilinx MPSoC has two Cortex R5 cpus in addition to four Cortex A53 cpus
  on the board.  It is also possible to add additional Cortex M4 cpus and
  Microblaze cpus in fabric. There could be a dozen independent processors.
  Users need to exchange data between the heterogeneous cpus. They usually
  set up their own ring structures over shared memory, or they use
  OpenAMP.  Either way, they need to share a cacheable memory region
  between them.  The MPSoC is very flexible and the memory region can come
  from a multitude of sources, including a portion of normal memory, or a
  portion of a special memory area on the board. There are a couple of
  special SRAM banks, 64K or 256K in size, that could be used for that.
  Also, PRAM can easily be added in fabric and used for this purpose.

At the very least, to handle the special memory regions, we need to
allow iomem to map them into a DomU as cacheable memory. So I do think
we need this interface extension.
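
For concreteness, here is a rough sketch of how the extended interface
could look (the memory_policy name, the exact values, and whether the
Arm #defines live in the common header are all still open, per the
discussion above):

  /* Illustrative only: names and values to be settled during review. */
  #define XEN_DOMCTL_MEM_POLICY_DEFAULT    0 /* arch default (current behaviour) */
  #define XEN_DOMCTL_MEM_POLICY_ARM_DEV    1 /* Device memory (Arm) */
  #define XEN_DOMCTL_MEM_POLICY_ARM_MEM_WB 2 /* Normal memory, write-back cacheable */

  struct xen_domctl_memory_mapping {
      uint64_aligned_t first_gfn; /* first page (hvm guest phys page) in range */
      uint64_aligned_t first_mfn; /* first page (machine page) in range */
      uint64_aligned_t nr_mfns;   /* number of pages in range (>0) */
      uint32_t add_mapping;       /* add or remove mapping */
      uint32_t memory_policy;     /* memory attributes for the mapping */
  };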

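On the Arm side, I would expect the domctl handler to pick the stage-2
attributes from the new field along these lines (just a sketch, assuming
the existing p2m_mmio_direct_dev/p2m_mmio_direct_c types and
map_regions_p2mt(); x86 would reject anything but the default for now):

  /* Sketch of the Arm-side handling; permission/error checks omitted. */
  p2m_type_t p2mt;

  switch ( memory_policy )
  {
  case XEN_DOMCTL_MEM_POLICY_ARM_MEM_WB:
      p2mt = p2m_mmio_direct_c;   /* Normal memory, write-back cacheable */
      break;
  case XEN_DOMCTL_MEM_POLICY_DEFAULT:
  case XEN_DOMCTL_MEM_POLICY_ARM_DEV:
      p2mt = p2m_mmio_direct_dev; /* Device memory, today's behaviour */
      break;
  default:
      return -EINVAL;
  }

  ret = map_regions_p2mt(d, _gfn(first_gfn), nr_mfns, _mfn(first_mfn), p2mt);
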
Let me know if you still have any doubts/questions. Otherwise I'll work
toward respinning the series in the proposed direction.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

