[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [PATCH 2/2] x86/pv: Provide better SYSCALL backwards compatibility in FRED mode


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Thu, 26 Mar 2026 21:05:15 +0000
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 26 Mar 2026 21:05:43 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 26/03/2026 9:14 am, Jan Beulich wrote:
> On 25.03.2026 18:02, Andrew Cooper wrote:
>> In FRED mode, the SYSCALL instruction does not modify %rcx/%r11.  Software
>> using SYSCALL spills %rcx/%r11 around the invocation, which is why FRED not
>> doing this goes largely unnoticed.
>>
>> However, consider the following migration scenario:
>>
>>  * VM suspends.  Hypercall, so SYSCALL, %rcx/%r11 left unmodified
>>  * VM moves to a non-FRED system
>>  * Xen resumes the VM with a real SYSRET instruction
>>
>> Instead of resuming at the instruction following the SYSCALL instruction, the
>> VM is resumed at whatever dead value was in %rcx.
> Would it? In restore_all_guest we load %r11 and %rcx from the stack
> frame's EFLAGS and RIP fields. If we didn't, various other things wouldn't
> work either.

Hmm.  I suppose so.  regs->rip/eflags is always going to be
reconstructed properly for the records in the transmitted stream.

What will be wrong is the %rcx/%r11 put onto the guest stack.

>
>> --- a/xen/arch/x86/traps.c
>> +++ b/xen/arch/x86/traps.c
>> @@ -2405,6 +2405,8 @@ void asmlinkage entry_from_pv(struct cpu_user_regs *regs)
>>  
>>              regs->ssx = l ? FLAT_KERNEL_SS   : FLAT_USER_SS32;
>>              regs->csx = l ? FLAT_KERNEL_CS64 : FLAT_USER_CS32;
>> +            regs->rcx = regs->rip;
>> +            regs->r11 = regs->rflags;
> Don't you also need to set TRAP_syscall here, for the new code in
> eretu_exit_to_guest to actually make a difference?

It is create_bounce_frame() which sets up TRAP_syscall.

>  (There actually is
> a paragraph about this in the comment out of context above, which then
> may also want adjusting.)
>
> Further a question as to limiting overhead: Doing this on every SYSCALL
> entry ...
>
>> @@ -26,7 +27,16 @@ FUNC(entry_FRED_R3, 4096)
>>  END(entry_FRED_R3)
>>  
>>  FUNC(eretu_exit_to_guest)
>> -        POP_GPRS
>> +        /*
>> +         * PV guests aren't aware of FRED.  If Xen in IDT mode would have used
>> +         * a SYSRET instruction, preserve the legacy behaviour for %rcx/%r11
>> +         */
>> +        testb   $TRAP_syscall >> 8, UREGS_entry_vector + 1(%rsp)
>> +
>> +        POP_GPRS /* Preserves flags */
>> +
>> +        cmovnz  EFRAME_rip(%rsp), %rcx
>> +        cmovnz  EFRAME_eflags(%rsp), %r11
> ... and every exit-to-guest isn't very nice when concern is about just the
> specific case of migrating FRED -> non-FRED. Couldn't we instead make the
> adjustment when generating the save record for the register state of the
> vCPU?

Ignoring migration for a moment, there are two further cases where
things go wrong.  Consider a VM which logically does this:

    // user mode
    SYSCALL
    mov %rcx, dbg_syscall_was_here

    // kernel mode
entry_SYSCALL:
    ... setup stack
    mov %rcx, UREGS_rip(%rsp)


Both of these positions under FRED have unexpected content in %rcx/%r11.

In userspace it is common to spill %rcx/%r11 and restore them around
SYSCALL, but that's not an ABI.  This is addressed by the hunk in
entry_from_pv().


For the kernel, the only reason CALLBACKTYPE_syscall functions in the
slightest in staging right now is because Xen gives the guest an IRET
frame and Linux doesn't need to reconstruct UREGS_rip/eflags manually.

In this case, it's baked into the PV64 ABI that "you will be entered by
SYSRET, so you must pick up the interrupted %rcx/%r11 off the stack",
and it strictly only applies to kernel code, and more specifically to
the Xen-specific parts.

If this were the only problem case, we could make an argument to say
that it would be a compatible change in the PV64 ABI, except we still
get into problems when the guest kernel is using HYPERCALL_iret in
SYSRET mode.

Linux is dealing with this problem by adjusting the unit test which
spots it, so that the test is skipped when FRED is active.  I'm not
convinced this is the best move.

~Andrew



 

