Re: [XEN][PATCH 1/3] x86/hvm: move hvm_shadow_handle_cd() in vmx code
- To: Jan Beulich <jbeulich@xxxxxxxx>
- From: Grygorii Strashko <grygorii_strashko@xxxxxxxx>
- Date: Thu, 6 Nov 2025 17:46:50 +0200
- Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Jason Andryuk <jason.andryuk@xxxxxxx>, Teddy Astie <teddy.astie@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
- Delivery-date: Thu, 06 Nov 2025 15:47:03 +0000
- List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
Hi
On 30.10.25 14:47, Jan Beulich wrote:
On 30.10.2025 13:28, Grygorii Strashko wrote:
On 30.10.25 13:08, Jan Beulich wrote:
On 30.10.2025 00:54, Grygorii Strashko wrote:
From: Grygorii Strashko <grygorii_strashko@xxxxxxxx>
Functions:
hvm_shadow_handle_cd()
hvm_set_uc_mode()
domain_exit_uc_mode()
are used only by Intel VMX code, so move them in VMX code.
Nit: I think both in the title and here you mean "to" or "into".
While here:
- minor format change in domain_exit_uc_mode()
- s/(0/1)/(false/true) for bool types
No functional changes.
Signed-off-by: Grygorii Strashko <grygorii_strashko@xxxxxxxx>
You did read Andrew's request to also move the involved structure field(s),
didn't you? Oh, wait - maybe that's going to be the subject of patch 3.
Yes, it is patch 3, and it is not small.
I really wanted this patch to contain as few modifications as possible on
top of the code movement.
I wonder what other x86 maintainers think here.
While
often splitting steps helps, I'm not sure that's very useful here. You're
touching again immediately what you just have moved, all to reach a single
goal.
@@ -1421,6 +1422,64 @@ static void cf_check vmx_set_segment_register(
vmx_vmcs_exit(v);
}
+/* Exit UC mode only if all VCPUs agree on MTRR/PAT and are not in no_fill. */
+static bool domain_exit_uc_mode(struct vcpu *v)
+{
+ struct domain *d = v->domain;
+ struct vcpu *vs;
+
+ for_each_vcpu(d, vs)
+ {
+ if ( (vs == v) || !vs->is_initialised )
+ continue;
+ if ( (vs->arch.hvm.cache_mode == NO_FILL_CACHE_MODE) ||
+ mtrr_pat_not_equal(vs, v) )
+ return false;
+ }
+
+ return true;
+}
+
+static void hvm_set_uc_mode(struct vcpu *v, bool is_in_uc_mode)
+{
+ v->domain->arch.hvm.is_in_uc_mode = is_in_uc_mode;
+ shadow_blow_tables_per_domain(v->domain);
+}
Similarly I wonder whether this function wouldn't better change to taking
struct domain * right away. "v" itself is only ever used to get hold of
its domain. At the call sites this will then make obvious that this is a
domain-wide operation.
Agreed, but in this patch I wanted to minimize changes and do the modifications
step by step. I can add an additional patch such as "rework struct domain access
in cache disable mode code".
Would that work?
I'm planning to resend with:
- changing hvm_set_uc_mode() to take a struct domain * parameter, as sketched below
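Roughly like this (just a sketch of the direction, not the final code; the
callers would then simply pass the domain instead of the vCPU):

  static void hvm_set_uc_mode(struct domain *d, bool is_in_uc_mode)
  {
      /* Domain-wide operation: record UC mode and blow the shadow tables. */
      d->arch.hvm.is_in_uc_mode = is_in_uc_mode;
      shadow_blow_tables_per_domain(d);
  }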
+static void hvm_shadow_handle_cd(struct vcpu *v, unsigned long value)
+{
+ if ( value & X86_CR0_CD )
+ {
+ /* Entering no fill cache mode. */
+ spin_lock(&v->domain->arch.hvm.uc_lock);
+ v->arch.hvm.cache_mode = NO_FILL_CACHE_MODE;
+
+ if ( !v->domain->arch.hvm.is_in_uc_mode )
+ {
+ domain_pause_nosync(v->domain);
+
+ /* Flush physical caches. */
+ flush_all(FLUSH_CACHE_EVICT);
+ hvm_set_uc_mode(v, true);
+
+ domain_unpause(v->domain);
+ }
+ spin_unlock(&v->domain->arch.hvm.uc_lock);
+ }
+ else if ( !(value & X86_CR0_CD) &&
+ (v->arch.hvm.cache_mode == NO_FILL_CACHE_MODE) )
+ {
+ /* Exit from no fill cache mode. */
+ spin_lock(&v->domain->arch.hvm.uc_lock);
+ v->arch.hvm.cache_mode = NORMAL_CACHE_MODE;
+
+ if ( domain_exit_uc_mode(v) )
+ hvm_set_uc_mode(v, false);
+
+ spin_unlock(&v->domain->arch.hvm.uc_lock);
+ }
+}
This function, in turn, could do with a local struct domain *d.
- using a local struct domain *d in hvm_shadow_handle_cd(), as sketched below
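Something like this (sketch only; it also assumes hvm_set_uc_mode() has been
switched to take a struct domain *, as above):

  static void hvm_shadow_handle_cd(struct vcpu *v, unsigned long value)
  {
      struct domain *d = v->domain;

      if ( value & X86_CR0_CD )
      {
          /* Entering no fill cache mode. */
          spin_lock(&d->arch.hvm.uc_lock);
          v->arch.hvm.cache_mode = NO_FILL_CACHE_MODE;

          if ( !d->arch.hvm.is_in_uc_mode )
          {
              domain_pause_nosync(d);

              /* Flush physical caches. */
              flush_all(FLUSH_CACHE_EVICT);
              hvm_set_uc_mode(d, true);

              domain_unpause(d);
          }
          spin_unlock(&d->arch.hvm.uc_lock);
      }
      else if ( !(value & X86_CR0_CD) &&
                (v->arch.hvm.cache_mode == NO_FILL_CACHE_MODE) )
      {
          /* Exit from no fill cache mode. */
          spin_lock(&d->arch.hvm.uc_lock);
          v->arch.hvm.cache_mode = NORMAL_CACHE_MODE;

          if ( domain_exit_uc_mode(v) )
              hvm_set_uc_mode(d, false);

          spin_unlock(&d->arch.hvm.uc_lock);
      }
  }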
static int cf_check vmx_set_guest_pat(struct vcpu *v, u64 gpat)
{
if ( !paging_mode_hap(v->domain) ||
Why did you put the code above this function? It's solely a helper of
vmx_handle_cd(), so would imo best be placed immediately ahead of that one.
Right. Since the vmx_x_guest_pat() functions are also used by vmx_handle_cd(),
I decided to put the code before them.
The main purpose of vmx_set_guest_pat() is, however, its use as a hook function.
It's merely an optimization that the function is called directly by VMX code.
- moving the code to immediately before vmx_handle_cd().
Bottom line: The change could go in as is, but imo it would be nice if it
was tidied some while moving.
I'd very much appreciate it if this could happen.
"this" being what out of the two or more possible options? (I take it you mean
"could go in as is", but that's guesswork.)
I'm not going to squash the rest of the series.
--
Best regards,
-grygorii