[xen staging] x86/hvm: limit memory type cache flush to running domains
commit 3125ca0da2d80123823ff24148467504aa4c7f13
Author: Roger Pau Monne <roger.pau@xxxxxxxxxx>
AuthorDate: Thu May 15 14:38:51 2025 +0200
Commit: Roger Pau Monne <roger.pau@xxxxxxxxxx>
CommitDate: Tue May 20 16:35:52 2025 +0200
x86/hvm: limit memory type cache flush to running domains
Avoid the cache flush if the domain is not yet running. There shouldn't be
any cached data resulting from domain accesses that need flushing, as the
domain hasn't run yet.
There can be data in the caches as a result of Xen and/or toolstack
behavior. Ideally we would do a cache flush strictly before starting the
domain; however, doing so only makes sense once we can guarantee there are
no leftover mappings of the affected ranges with cacheable attributes, as
otherwise the CPU can speculatively populate the cache with data from those
ranges.
No change in domain observable behavior intended.
Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
---
xen/arch/x86/hvm/mtrr.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 402a1d9263..8e1e15af8d 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -783,7 +783,13 @@ HVM_REGISTER_SAVE_RESTORE(MTRR, hvm_save_mtrr_msr, NULL, hvm_load_mtrr_msr, 1,
 void memory_type_changed(struct domain *d)
 {
     if ( (is_iommu_enabled(d) || cache_flush_permitted(d)) &&
-         d->vcpu && d->vcpu[0] && p2m_memory_type_changed(d) )
+         d->vcpu && d->vcpu[0] && p2m_memory_type_changed(d) &&
+         /*
+          * Do the p2m type-change, but skip the cache flush if the domain is
+          * not yet running.  The check for creation_finished must strictly be
+          * done after the call to p2m_memory_type_changed().
+          */
+         d->creation_finished )
     {
         flush_all(FLUSH_CACHE);
     }
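
For reference, a sketch of how memory_type_changed() reads with the change applied, reconstructed from the hunk above (surrounding code in mtrr.c elided):

void memory_type_changed(struct domain *d)
{
    if ( (is_iommu_enabled(d) || cache_flush_permitted(d)) &&
         d->vcpu && d->vcpu[0] && p2m_memory_type_changed(d) &&
         /*
          * Do the p2m type-change, but skip the cache flush if the domain is
          * not yet running.  The check for creation_finished must strictly be
          * done after the call to p2m_memory_type_changed().
          */
         d->creation_finished )
    {
        flush_all(FLUSH_CACHE);
    }
}

Because C evaluates && left to right with short-circuiting, p2m_memory_type_changed() has already been called, and the p2m type change already carried out, by the time d->creation_finished is evaluated; only the cache flush is skipped for a domain that has not started running yet.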
--
generated by git-patchbot for /home/xen/git/xen.git#staging