
Re: [Xen-devel] [PATCH 1/3] xen/vt-d: need barriers to workaround CLFLUSH



>>> On 06.05.15 at 09:26, <tiejun.chen@xxxxxxxxx> wrote:
> On 2015/5/6 15:12, Jan Beulich wrote:
>>>>> On 05.05.15 at 18:11, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>> On 05/05/2015 11:58 AM, Jan Beulich wrote:
>>>>>>> On 05.05.15 at 17:46, <boris.ostrovsky@xxxxxxxxxx> wrote:
>>>>> On 05/04/2015 05:14 AM, Andrew Cooper wrote:
>>>>>> On 04/05/2015 09:52, Jan Beulich wrote:
>>>>>>>>>> On 04.05.15 at 04:16, <tiejun.chen@xxxxxxxxx> wrote:
>>>>>>>> --- a/xen/drivers/passthrough/vtd/x86/vtd.c
>>>>>>>> +++ b/xen/drivers/passthrough/vtd/x86/vtd.c
>>>>>>>> @@ -56,7 +56,9 @@ unsigned int get_cache_line_size(void)
>>>>>>>>
>>>>>>>>     void cacheline_flush(char * addr)
>>>>>>>>     {
>>>>>>>> +    mb();
>>>>>>>>         clflush(addr);
>>>>>>>> +    mb();
>>>>>>>>     }
>>>>>>> I think the purpose of the flush is to force write back, not to evict
>>>>>>> the cache line, and if so wmb() would appear to be sufficient. As
>>>>>>> the SDM says that's not the case, a comment explaining why wmb()
>>>>>>> is not sufficient would seem necessary. Plus in the description I
>>>>>>> think "serializing" needs to be changed to "fencing", as serialization
>>>>>>> is not what we really care about here. If you and the maintainers
>>>>>>> agree, I could certainly fix up both aspects while committing.
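
To make the ordering concern concrete, here is a minimal sketch (plain GCC inline assembly, not the committed Xen code) of the fenced flush, with mb() spelled out as mfence:

/*
 * Sketch only, assuming mb() expands to mfence.  CLFLUSH is not
 * architecturally guaranteed to be ordered by SFENCE (i.e. by wmb()),
 * hence the full fences on both sides.
 */
static inline void clflush(const void *p)
{
    asm volatile ( "clflush %0" :: "m" (*(const volatile char *)p) : "memory" );
}

void cacheline_flush(char *addr)
{
    /* Make earlier stores to the line globally visible before the flush. */
    asm volatile ( "mfence" ::: "memory" );
    clflush(addr);
    /* Prevent later accesses from being reordered ahead of the flush. */
    asm volatile ( "mfence" ::: "memory" );
}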
>>>>>> On the subject of writebacks, we should get around to wiring up the
>>>>>> use of clflushopt and clwb via the alternatives framework, either of
>>>>>> which would be better than a clflush in this case (avoiding the need
>>>>>> for the leading mfence).
>>>>>>
>>>>>> However, the ISA extension document does not indicate which processors
>>>>>> will have support for these new instructions.
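
Support for them can at least be detected at run time, since both instructions are enumerated in CPUID leaf 7. A hedged sketch using GCC's <cpuid.h>; the macro and function names are illustrative, not Xen's:

#include <cpuid.h>
#include <stdbool.h>

/* CPUID.(EAX=7,ECX=0):EBX feature bits, per the ISA extensions document. */
#define FEAT_CLFLUSHOPT (1u << 23)
#define FEAT_CLWB       (1u << 24)

static bool cpu_has_feature(unsigned int mask)
{
    unsigned int eax, ebx, ecx, edx;

    if ( !__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) )
        return false;               /* CPUID leaf 7 not available */
    return (ebx & mask) != 0;
}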
>>>>> https://software.intel.com/sites/default/files/managed/0d/53/319433-022.pdf
>>>>>  
>>>>>
>>>>> We really should add support for this. On shutting down a very large
>>>>> guest (hundreds of GB) we observed *minutes* spent in flushing the
>>>>> IOMMU. This was due to the serializing nature of CLFLUSH.
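
That serialization is what clflushopt avoids: unlike clflush, its executions may overlap, so a large range can be flushed with a single trailing sfence rather than fencing every line. A rough sketch of the difference, reusing cpu_has_feature()/FEAT_CLFLUSHOPT from the sketch above (CACHE_LINE is an assumption; real code would read the line size from CPUID, and the assembler must know clflushopt):

#include <stddef.h>
#include <stdint.h>

#define CACHE_LINE 64       /* assumed; derive from CPUID in real code */

void flush_range(const void *p, size_t size)
{
    const char *addr = (const char *)((uintptr_t)p & ~(uintptr_t)(CACHE_LINE - 1));
    const char *end = (const char *)p + size;

    if ( cpu_has_feature(FEAT_CLFLUSHOPT) )
    {
        /* clflushopt executions may proceed in parallel: fence once. */
        for ( ; addr < end; addr += CACHE_LINE )
            asm volatile ( "clflushopt %0" :: "m" (*(const volatile char *)addr) : "memory" );
        asm volatile ( "sfence" ::: "memory" );
    }
    else
    {
        /* clflush executions are ordered w.r.t. each other, which is
         * what makes flushing hundreds of GB take minutes. */
        asm volatile ( "mfence" ::: "memory" );
        for ( ; addr < end; addr += CACHE_LINE )
            asm volatile ( "clflush %0" :: "m" (*(const volatile char *)addr) : "memory" );
        asm volatile ( "mfence" ::: "memory" );
    }
}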
>>>> But flushing the IOMMU isn't done via CPU instructions; it's done
>>>> via commands sent to the IOMMU. I.e. I'm somewhat confused by
>>>> your reply.
>>>
>>> I didn't mean flushing the IOMMU itself, sorry. I meant
>>> __iommu_flush_cache() (or whatever its equivalent was in the
>>> 4.1-based product we were using).
>>
>> In that case I wonder how much of that flushing is really necessary
> 
> Sorry, what is that case?

The flushing of CPU-side caches during IOMMU teardown for a guest.

>> during IOMMU teardown. VT-d maintainers?
>>
> 
> In most cases __iommu_flush_cache() is used to flush the remapping 
> structures out to memory so that the IOMMU sees up-to-date data.

Right, but here we're talking about teardown. Since the IOMMU isn't
going to use any of the dying guest's mappings anymore anyway, there's
little point in flushing the respective changes out of the caches (or
at least the flushing could be limited to the ripping out of the top
level structures, provided those get zapped before anything hanging
off of them).
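
A hypothetical sketch of that ordering; every name here (root_entry, teardown_domain_mappings, free_lower_level_tables) is illustrative rather than the real VT-d code:

#include <stdint.h>

struct root_entry { uint64_t val; };    /* stand-in for a VT-d top-level entry */

void teardown_domain_mappings(struct root_entry *re)
{
    /* Zap the top-level entry first, so the IOMMU can no longer walk
     * into the lower-level tables... */
    re->val = 0;
    flush_range(re, sizeof(*re));       /* from the sketch further up */

    /* ...then the (potentially huge) lower-level tables can be torn
     * down without any per-line cache flushing, since the IOMMU will
     * never read them again.
     * free_lower_level_tables(...);    -- hypothetical */
}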

Jan

