
[PATCH] OvmfPkg/OvmfXen: Fix S3


  • To: <devel@xxxxxxxxxxxxxx>
  • From: Xenia Ragiadakou <xenia.ragiadakou@xxxxxxx>
  • Date: Thu, 13 Jul 2023 13:47:12 +0300
  • Cc: Xenia Ragiadakou <xenia.ragiadakou@xxxxxxx>, Ard Biesheuvel <ardb+tianocore@xxxxxxxxxx>, Jiewen Yao <jiewen.yao@xxxxxxxxx>, Jordan Justen <jordan.l.justen@xxxxxxxxx>, Gerd Hoffmann <kraxel@xxxxxxxxxx>, "Anthony Perard" <anthony.perard@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Thu, 13 Jul 2023 10:48:31 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Currently, resuming an S3-suspended guest results in the following
assertion failure:

ASSERT
MdePkg/Library/PeiResourcePublicationLib/PeiResourcePublicationLib.c(41):
MemoryLength > 0

This happens because parts of the S3 suspend and resume paths are
missing. For instance, the variables mS3AcpiReservedMemoryBase and
mS3AcpiReservedMemorySize are never initialized, and the memory
regions used on S3 resume are either not reserved at all or not
marked as ACPI NVS, so the OS is free to corrupt them. This patch
adds the missing parts, based heavily on the existing S3
implementation of the other virtual platforms.

For S3 support, fw_cfg must be provided so that the supported
suspend states can be retrieved from it.
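
For reference, a minimal sketch of how S3 availability can be queried
over fw_cfg, roughly along the lines of OvmfPkg's QemuFwCfgS3Enabled();
the helper name is hypothetical, and the "etc/system-states" layout
(six bytes, byte 3 describing S3, bit 7 meaning "enabled") is assumed
from QEMU:

  #include <Base.h>
  #include <Library/QemuFwCfgLib.h>

  STATIC
  BOOLEAN
  XenPlatformQueryS3Support (   // hypothetical helper, for illustration
    VOID
    )
  {
    RETURN_STATUS         Status;
    FIRMWARE_CONFIG_ITEM  Item;
    UINTN                 Size;
    UINT8                 SystemStates[6];

    //
    // Look up the "etc/system-states" file exposed over fw_cfg.
    //
    Status = QemuFwCfgFindFile ("etc/system-states", &Item, &Size);
    if ((Status != RETURN_SUCCESS) || (Size != sizeof SystemStates)) {
      return FALSE;
    }

    //
    // Read the six state bytes; byte 3 covers S3, bit 7 marks it enabled.
    //
    QemuFwCfgSelectItem (Item);
    QemuFwCfgReadBytes (sizeof SystemStates, SystemStates);
    return (BOOLEAN)((SystemStates[3] & BIT7) != 0);
  }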

Another issue is that, when CalibrateLapicTimer() is called on the
S3 resume path, the shared info page is remapped to a different
guest physical address. This remapping happens behind the guest's
back, so any subsequent guest access to the shared info page results
in nested page faults. This patch therefore removes local APIC timer
initialization and calibration from the S3 resume path.
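
For context, the guest maps the shared info page itself via the
XENMEM_add_to_physmap hypercall, so issuing that call again with a
different frame number relocates the page. Below is a minimal sketch
of such a call, assuming the structures and constants under
OvmfPkg/Include/IndustryStandard/Xen and XenHypercallLib; the helper
name is hypothetical:

  #include <IndustryStandard/Xen/xen.h>
  #include <IndustryStandard/Xen/memory.h>
  #include <Library/XenHypercallLib.h>

  //
  // Sketch: ask Xen to place the shared info page at guest frame Gfn.
  // Calling this a second time with a new Gfn moves the page, which is
  // why a repeated calibration pass on S3 resume is problematic.
  //
  STATIC
  INTN
  MapSharedInfoPageAt (         // hypothetical helper, for illustration
    IN UINTN  Gfn
    )
  {
    xen_add_to_physmap_t  Xatp;

    Xatp.domid = DOMID_SELF;
    Xatp.space = XENMAPSPACE_shared_info;
    Xatp.idx   = 0;
    Xatp.gpfn  = Gfn;

    return XenHypercallMemoryOp (XENMEM_add_to_physmap, &Xatp);
  }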

Signed-off-by: Xenia Ragiadakou <xenia.ragiadakou@xxxxxxx>
---
 OvmfPkg/XenPlatformPei/Fv.c               |  2 +-
 OvmfPkg/XenPlatformPei/MemDetect.c        | 60 ++++++++++++++++++++++-
 OvmfPkg/XenPlatformPei/Platform.c         | 11 ++++-
 OvmfPkg/XenPlatformPei/Platform.h         |  2 +
 OvmfPkg/XenPlatformPei/XenPlatformPei.inf |  7 +++
 5 files changed, 78 insertions(+), 4 deletions(-)

diff --git a/OvmfPkg/XenPlatformPei/Fv.c b/OvmfPkg/XenPlatformPei/Fv.c
index 871a2c1c5b..37ecb3cfee 100644
--- a/OvmfPkg/XenPlatformPei/Fv.c
+++ b/OvmfPkg/XenPlatformPei/Fv.c
@@ -37,7 +37,7 @@ PeiFvInitialization (
   BuildMemoryAllocationHob (
     PcdGet32 (PcdOvmfPeiMemFvBase),
     PcdGet32 (PcdOvmfPeiMemFvSize),
-    EfiBootServicesData
+    mS3Supported ? EfiACPIMemoryNVS : EfiBootServicesData
     );
 
   //
diff --git a/OvmfPkg/XenPlatformPei/MemDetect.c b/OvmfPkg/XenPlatformPei/MemDetect.c
index e552e7a55e..1724a4988f 100644
--- a/OvmfPkg/XenPlatformPei/MemDetect.c
+++ b/OvmfPkg/XenPlatformPei/MemDetect.c
@@ -283,6 +283,19 @@ PublishPeiMemory (
 
   LowerMemorySize = GetSystemMemorySizeBelow4gb ();
 
+  //
+  // If S3 is supported, then the S3 permanent PEI memory is placed next,
+  // downwards. Its size is primarily dictated by CpuMpPei. The formula below
+  // is an approximation.
+  //
+  if (mS3Supported) {
+    mS3AcpiReservedMemorySize = SIZE_512KB +
+                                PcdGet32 (PcdCpuMaxLogicalProcessorNumber) *
+                                PcdGet32 (PcdCpuApStackSize);
+    mS3AcpiReservedMemoryBase = LowerMemorySize - mS3AcpiReservedMemorySize;
+    LowerMemorySize           = mS3AcpiReservedMemoryBase;
+  }
+
   if (mBootMode == BOOT_ON_S3_RESUME) {
     MemoryBase = mS3AcpiReservedMemoryBase;
     MemorySize = mS3AcpiReservedMemorySize;
@@ -328,6 +341,51 @@ InitializeRamRegions (
 {
   XenPublishRamRegions ();
 
+  if (mS3Supported && (mBootMode != BOOT_ON_S3_RESUME)) {
+    //
+    // This is the memory range that will be used for PEI on S3 resume
+    //
+    BuildMemoryAllocationHob (
+      mS3AcpiReservedMemoryBase,
+      mS3AcpiReservedMemorySize,
+      EfiACPIMemoryNVS
+      );
+
+    //
+    // Cover the initial RAM area used as stack and temporary PEI heap.
+    //
+    // This is reserved as ACPI NVS so it can be used on S3 resume.
+    //
+    BuildMemoryAllocationHob (
+      PcdGet32 (PcdOvmfSecPeiTempRamBase),
+      PcdGet32 (PcdOvmfSecPeiTempRamSize),
+      EfiACPIMemoryNVS
+      );
+
+    //
+    // SEC stores its table of GUIDed section handlers here.
+    //
+    BuildMemoryAllocationHob (
+      PcdGet64 (PcdGuidedExtractHandlerTableAddress),
+      PcdGet32 (PcdGuidedExtractHandlerTableSize),
+      EfiACPIMemoryNVS
+      );
+
+ #ifdef MDE_CPU_X64
+    //
+    // Reserve the initial page tables built by the reset vector code.
+    //
+    // Since this memory range will be used by the Reset Vector on S3
+    // resume, it must be reserved as ACPI NVS.
+    //
+    BuildMemoryAllocationHob (
+      (EFI_PHYSICAL_ADDRESS)(UINTN)PcdGet32 (PcdOvmfSecPageTablesBase),
+      (UINT64)(UINTN)PcdGet32 (PcdOvmfSecPageTablesSize),
+      EfiACPIMemoryNVS
+      );
+ #endif
+  }
+
   if (mBootMode != BOOT_ON_S3_RESUME) {
     //
     // Reserve the lock box storage area
@@ -346,7 +404,7 @@ InitializeRamRegions (
     BuildMemoryAllocationHob (
       (EFI_PHYSICAL_ADDRESS)(UINTN)PcdGet32 (PcdOvmfLockBoxStorageBase),
       (UINT64)(UINTN)PcdGet32 (PcdOvmfLockBoxStorageSize),
-      EfiBootServicesData
+      mS3Supported ? EfiACPIMemoryNVS : EfiBootServicesData
       );
   }
 }
diff --git a/OvmfPkg/XenPlatformPei/Platform.c b/OvmfPkg/XenPlatformPei/Platform.c
index c3fdf3d0b8..1b074cff33 100644
--- a/OvmfPkg/XenPlatformPei/Platform.c
+++ b/OvmfPkg/XenPlatformPei/Platform.c
@@ -60,6 +60,8 @@ UINT16  mHostBridgeDevId;
 
 EFI_BOOT_MODE  mBootMode = BOOT_WITH_FULL_CONFIGURATION;
 
+BOOLEAN  mS3Supported = FALSE;
+
 VOID
 AddIoMemoryBaseSizeHob (
   EFI_PHYSICAL_ADDRESS  MemoryBase,
@@ -350,6 +352,11 @@ BootModeInitialization (
 
   if (CmosRead8 (0xF) == 0xFE) {
     mBootMode = BOOT_ON_S3_RESUME;
+    if (!mS3Supported) {
+      DEBUG ((DEBUG_ERROR, "ERROR: S3 not supported\n"));
+      ASSERT (FALSE);
+      CpuDeadLoop ();
+    }
   }
 
   CmosWrite8 (0xF, 0x00);
@@ -463,6 +470,7 @@ InitializeXenPlatform (
   //
   if (QemuFwCfgS3Enabled ()) {
     DEBUG ((DEBUG_INFO, "S3 support was detected on QEMU\n"));
+    mS3Supported = TRUE;
     Status = PcdSetBoolS (PcdAcpiS3Enable, TRUE);
     ASSERT_EFI_ERROR (Status);
   }
@@ -481,9 +489,8 @@ InitializeXenPlatform (
 
   InitializeRamRegions ();
 
-  CalibrateLapicTimer ();
-
   if (mBootMode != BOOT_ON_S3_RESUME) {
+    CalibrateLapicTimer ();
     ReserveEmuVariableNvStore ();
     PeiFvInitialization ();
     MemMapInitialization ();
diff --git a/OvmfPkg/XenPlatformPei/Platform.h b/OvmfPkg/XenPlatformPei/Platform.h
index 7b4de128e7..fda66747cc 100644
--- a/OvmfPkg/XenPlatformPei/Platform.h
+++ b/OvmfPkg/XenPlatformPei/Platform.h
@@ -139,4 +139,6 @@ extern UINT8  mPhysMemAddressWidth;
 
 extern UINT16  mHostBridgeDevId;
 
+extern BOOLEAN  mS3Supported;
+
 #endif // _PLATFORM_PEI_H_INCLUDED_
diff --git a/OvmfPkg/XenPlatformPei/XenPlatformPei.inf b/OvmfPkg/XenPlatformPei/XenPlatformPei.inf
index 20c27ff34b..a359cf60ca 100644
--- a/OvmfPkg/XenPlatformPei/XenPlatformPei.inf
+++ b/OvmfPkg/XenPlatformPei/XenPlatformPei.inf
@@ -69,9 +69,13 @@
   gUefiOvmfPkgTokenSpaceGuid.PcdOvmfPeiMemFvSize
   gUefiOvmfPkgTokenSpaceGuid.PcdOvmfDxeMemFvBase
   gUefiOvmfPkgTokenSpaceGuid.PcdOvmfDxeMemFvSize
+  gUefiOvmfPkgTokenSpaceGuid.PcdOvmfSecPeiTempRamBase
+  gUefiOvmfPkgTokenSpaceGuid.PcdOvmfSecPeiTempRamSize
   gUefiOvmfPkgTokenSpaceGuid.PcdOvmfSecPageTablesBase
+  gUefiOvmfPkgTokenSpaceGuid.PcdOvmfSecPageTablesSize
   gUefiOvmfPkgTokenSpaceGuid.PcdOvmfLockBoxStorageBase
   gUefiOvmfPkgTokenSpaceGuid.PcdOvmfLockBoxStorageSize
+  gUefiOvmfPkgTokenSpaceGuid.PcdGuidedExtractHandlerTableSize
   gUefiOvmfPkgTokenSpaceGuid.PcdOvmfHostBridgePciDevId
   gUefiOvmfPkgTokenSpaceGuid.PcdPciIoBase
   gUefiOvmfPkgTokenSpaceGuid.PcdPciIoSize
@@ -80,6 +84,7 @@
   gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Base
   gUefiOvmfPkgTokenSpaceGuid.PcdPciMmio64Size
   gUefiOvmfPkgTokenSpaceGuid.PcdQ35TsegMbytes
+  gEfiMdePkgTokenSpaceGuid.PcdGuidedExtractHandlerTableAddress
   gEfiMdeModulePkgTokenSpaceGuid.PcdAcpiS3Enable
   gEfiMdeModulePkgTokenSpaceGuid.PcdFlashNvStorageFtwSpareSize
   gEfiMdeModulePkgTokenSpaceGuid.PcdEmuVariableNvStoreReserved
@@ -89,6 +94,8 @@
   gEfiMdePkgTokenSpaceGuid.PcdFSBClock
   gEfiSecurityPkgTokenSpaceGuid.PcdOptionRomImageVerificationPolicy
   gUefiCpuPkgTokenSpaceGuid.PcdCpuLocalApicBaseAddress
+  gUefiCpuPkgTokenSpaceGuid.PcdCpuApStackSize
+  gUefiCpuPkgTokenSpaceGuid.PcdCpuMaxLogicalProcessorNumber
 
   gUefiOvmfPkgTokenSpaceGuid.PcdXenPvhStartOfDayStructPtr
   gUefiOvmfPkgTokenSpaceGuid.PcdXenPvhStartOfDayStructPtrSize
-- 
2.34.1




 

