
Re: [Xen-devel] [PATCH for-4.5 v11 4/9] xen/arm: Data abort exception (R/W) mem_events.



On 09/29/2014 01:47 PM, Tamas K Lengyel wrote:
>     > +/* Set access type for a region of pfns.
>     > + * If start_pfn == -1ul, sets the default access type */
>     > +long p2m_set_mem_access(struct domain *d, unsigned long pfn, uint32_t nr,
>     > +                        uint32_t start, uint32_t mask, xenmem_access_t access)
>     > +{
> 
> 
>     [..]
> 
>     > +
>     > +    rc = apply_p2m_changes(d, MEMACCESS,
>     > +                           pfn_to_paddr(pfn+start), pfn_to_paddr(pfn+nr),
>     > +                           0, MATTR_MEM, mask, 0, a);
>     > +
>     > +    flush_tlb_domain(d);
>     > +    iommu_iotlb_flush(d, pfn+start, nr-start);
> 
>     With your solution, when rc == 0 (i.e. the memaccess call has been
>     fully applied), you will have one more TLB flush: one in
>     apply_p2m_changes, the other one here...
> 
> 
> No, I don't flush it in apply_p2m_changes; *flush is not set to true
> for MEMACCESS in this version.

Oh right, sorry, I hadn't noticed it.
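
For reference, the relevant shape inside apply_p2m_changes is roughly
the following (a sketch only, not the exact code):

    bool_t flush = 0;

    /* ... walk the range; the operations that modify mappings set the
     * flush flag (the *flush you mention), while MEMACCESS leaves it
     * 0 in this version ... */

    if ( flush )
        flush_tlb_domain(d);

so for MEMACCESS the only flush is indeed the explicit one in
p2m_set_mem_access().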

>     As this code already exists in apply_p2m_changes, just in the wrong
>     place, why didn't you move it later in that function?
> 
> 
> The problem is the preemption case, which just goes to out. I found it
> cleaner to flush the TLB here for both cases instead of having the
> preemption case go to a flush: label and then to out. If that's
> preferred, I'm OK with that approach too.

What is the issue with doing the following?

out:
   if ( flush )
   {
      TLB flush
   }

   if ( rc < 0 && ( op == INSERT ....

   ...

   return rc;
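
where the "TLB flush" above would simply be what you currently do in
the caller, e.g. (just a sketch; start_gpaddr/end_gpaddr stand for the
range passed to apply_p2m_changes and are illustrative names):

      /* With MEMACCESS also setting flush, the completed and the
       * preempted (goto out) paths would both flush exactly once here. */
      flush_tlb_domain(d);
      iommu_iotlb_flush(d, paddr_to_pfn(start_gpaddr),
                        paddr_to_pfn(end_gpaddr) - paddr_to_pfn(start_gpaddr));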

I would prefer this approach even if you have to duplicate the flush
here, as at the same time you would fix the other flush issue in the
different error paths.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

