[Xen-devel] [ovmf test] 58937: regressions - FAIL
flight 58937 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/58937/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-win7-amd64   9 windows-install       fail REGR. vs. 58919

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-stop            fail like 58919

version targeted for testing:
 ovmf                 79d274b8b6b113248661c18f31c4be03c7da32de
baseline version:
 ovmf                 495ee9b85141dd9b65434d677b3a685fe166128d

------------------------------------------------------------
People who touched revisions under test:
  Laszlo Ersek <lersek@xxxxxxxxxx>
  Maoming <maoming.maoming@xxxxxxxxxx>
  Wei Liu <wei.liu2@xxxxxxxxxx>

------------------------------------------------------------
jobs:
 build-amd64-xsm                                              pass
 build-i386-xsm                                               pass
 build-amd64                                                  pass
 build-i386                                                   pass
 build-amd64-libvirt                                          pass
 build-i386-libvirt                                           pass
 build-amd64-pvops                                            pass
 build-i386-pvops                                             pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm                pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm                 pass
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail
 test-amd64-i386-xl-qemuu-win7-amd64                          fail
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1                     pass
 test-amd64-amd64-xl-qemuu-winxpsp3                           pass
 test-amd64-i386-xl-qemuu-winxpsp3                            pass

------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 79d274b8b6b113248661c18f31c4be03c7da32de
Author: Laszlo Ersek <lersek@xxxxxxxxxx>
Date:   Fri Jun 26 16:09:52 2015 +0000

    OvmfPkg: PlatformPei: invert MTRR setup in QemuInitializeRam()

    At the moment we work with a UC default MTRR type, and set three memory
    ranges to WB:
    - [0, 640 KB),
    - [1 MB, LowerMemorySize),
    - [4 GB, 4 GB + UpperMemorySize).

    Unfortunately, coverage for the third range can fail with a high
    likelihood. If the alignment of the base (ie. 4 GB) and the alignment
    of the size (UpperMemorySize) differ, then MtrrLib creates a series of
    variable MTRR entries, with power-of-two sized MTRR masks. And, it's
    really easy to run out of variable MTRR entries, dependent on the
    alignment difference.

    This is a problem because a Linux guest will loudly reject any high
    memory that is not covered by MTRR.

    So, let's follow the inverse pattern (loosely inspired by SeaBIOS):
    - flip the MTRR default type to WB,
    - set [0, 640 KB) to WB -- fixed MTRRs have precedence over the default
      type and variable MTRRs, so we can't avoid this,
    - set [640 KB, 1 MB) to UC -- implemented with fixed MTRRs,
    - set [LowerMemorySize, 4 GB) to UC -- should succeed with variable
      MTRRs more likely than the other scheme (due to less chaotic
      alignment differences).

    Effects of this patch can be observed by setting DEBUG_CACHE
    (0x00200000) in PcdDebugPrintErrorLevel.

    Cc: Maoming <maoming.maoming@xxxxxxxxxx>
    Cc: Huangpeng (Peter) <peter.huangpeng@xxxxxxxxxx>
    Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
    Contributed-under: TianoCore Contribution Agreement 1.0
    Signed-off-by: Laszlo Ersek <lersek@xxxxxxxxxx>
    Tested-by: Maoming <maoming.maoming@xxxxxxxxxx>
    Reviewed-by: Jordan Justen <jordan.l.justen@xxxxxxxxx>

    git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17722 6f19259b-4bc3-4df7-8a09-765794883524
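As a rough illustration of the inverted scheme described above, a minimal
sketch against UefiCpuPkg's MtrrLib (MtrrGetAllMtrrs, MtrrSetAllMtrrs,
MtrrSetMemoryAttribute) might look as follows; the routine name and the
LowerMemorySize parameter are assumptions made for the example, not the
patch itself:

/*
 * Sketch only: inverted MTRR setup, assuming the MtrrLib API from
 * UefiCpuPkg. Not the actual OvmfPkg/PlatformPei code.
 */
#include <PiPei.h>
#include <Library/BaseMemoryLib.h>      // SetMem(), ZeroMem()
#include <Library/MtrrLib.h>            // MTRR_SETTINGS, MtrrSetMemoryAttribute()

STATIC
VOID
SketchInvertedMtrrSetup (
  IN UINT64  LowerMemorySize           // assumed: top of RAM below 4 GB
  )
{
  MTRR_SETTINGS  MtrrSettings;

  //
  // Read the current settings, make every fixed MTRR write-back (these
  // cover [0, 1 MB) and take precedence), clear all variable MTRRs, and
  // flip the default type to write-back (type 6) with MTRRs and fixed
  // MTRRs enabled (bits 11 and 10).
  //
  MtrrGetAllMtrrs (&MtrrSettings);
  SetMem (&MtrrSettings.Fixed, sizeof MtrrSettings.Fixed, 0x06);
  ZeroMem (&MtrrSettings.Variables, sizeof MtrrSettings.Variables);
  MtrrSettings.MtrrDefType |= BIT11 | BIT10 | 6;
  MtrrSetAllMtrrs (&MtrrSettings);

  //
  // Carve the two uncacheable holes out of the write-back default:
  // [640 KB, 1 MB) via fixed MTRRs, [LowerMemorySize, 4 GB) via
  // variable MTRRs.
  //
  MtrrSetMemoryAttribute (BASE_512KB + BASE_128KB,
    BASE_1MB - (BASE_512KB + BASE_128KB), CacheUncacheable);
  MtrrSetMemoryAttribute (LowerMemorySize,
    SIZE_4GB - LowerMemorySize, CacheUncacheable);
}

With this shape, only the [LowerMemorySize, 4 GB) hole consumes variable
MTRRs, which is why it is less likely to exhaust them than covering all of
high RAM with WB entries.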
commit cfc80e2e95ee639e240c09eaeab76c0286bf917e
Author: Laszlo Ersek <lersek@xxxxxxxxxx>
Date:   Fri Jun 26 16:09:48 2015 +0000

    OvmfPkg: PlatformPei: beautify memory HOB order in QemuInitializeRam()

    Build the memory HOBs in a tight block, in increasing base address
    order.

    Cc: Maoming <maoming.maoming@xxxxxxxxxx>
    Cc: Huangpeng (Peter) <peter.huangpeng@xxxxxxxxxx>
    Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
    Contributed-under: TianoCore Contribution Agreement 1.0
    Signed-off-by: Laszlo Ersek <lersek@xxxxxxxxxx>
    Tested-by: Maoming <maoming.maoming@xxxxxxxxxx>
    Reviewed-by: Jordan Justen <jordan.l.justen@xxxxxxxxx>

    git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17721 6f19259b-4bc3-4df7-8a09-765794883524
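For context, one possible shape of such a tight, ascending block of memory
HOBs, sketched with HobLib's BuildResourceDescriptorHob(); the attribute
set and the LowerMemorySize/UpperMemorySize parameters are assumptions for
the example, not the patch:

/*
 * Sketch only: RAM resource descriptor HOBs built back to back, in
 * increasing base address order.
 */
#include <PiPei.h>
#include <Library/HobLib.h>             // BuildResourceDescriptorHob()

STATIC
VOID
SketchBuildMemoryHobs (
  IN UINT64  LowerMemorySize,           // assumed: top of RAM below 4 GB
  IN UINT64  UpperMemorySize            // assumed: RAM size above 4 GB
  )
{
  EFI_RESOURCE_ATTRIBUTE_TYPE  Attributes;

  Attributes = EFI_RESOURCE_ATTRIBUTE_PRESENT |
               EFI_RESOURCE_ATTRIBUTE_INITIALIZED |
               EFI_RESOURCE_ATTRIBUTE_TESTED;

  // [0, 640 KB)
  BuildResourceDescriptorHob (EFI_RESOURCE_SYSTEM_MEMORY, Attributes,
    0, BASE_512KB + BASE_128KB);
  // [1 MB, LowerMemorySize)
  BuildResourceDescriptorHob (EFI_RESOURCE_SYSTEM_MEMORY, Attributes,
    BASE_1MB, LowerMemorySize - BASE_1MB);
  // [4 GB, 4 GB + UpperMemorySize)
  if (UpperMemorySize != 0) {
    BuildResourceDescriptorHob (EFI_RESOURCE_SYSTEM_MEMORY, Attributes,
      SIZE_4GB, UpperMemorySize);
  }
}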
commit 86a14b0a7b89d5c301a167fa71ab765f56b878f0
Author: Laszlo Ersek <lersek@xxxxxxxxxx>
Date:   Fri Jun 26 16:09:43 2015 +0000

    OvmfPkg: PlatformPei: create the CPU HOB with dynamic memory space width

    Maoming reported that guest memory sizes equal to or larger than 64GB
    were not correctly handled by OVMF.

    Enabling the DEBUG_GCD (0x00100000) bit in PcdDebugPrintErrorLevel, and
    starting QEMU with 64GB guest RAM size, I found the following error in
    the OVMF debug log:

    > GCD:AddMemorySpace(Base=0000000100000000,Length=0000000F40000000)
    >   GcdMemoryType   = Reserved
    >   Capabilities    = 030000000000000F
    >   Status = Unsupported

    This message is emitted when the DXE core is initializing the memory
    space map, processing the "above 4GB" memory resource descriptor HOB
    that was created by OVMF's QemuInitializeRam() function (see
    "UpperMemorySize").

    The DXE core's call chain fails in:

      CoreInternalAddMemorySpace()        [MdeModulePkg/Core/Dxe/Gcd/Gcd.c]
        CoreConvertSpace()
          //
          // Search for the list of descriptors that cover the range
          // BaseAddress to BaseAddress+Length
          //
          CoreSearchGcdMapEntry()

    CoreSearchGcdMapEntry() fails because the one entry (with type
    "nonexistent") in the initial GCD memory space map is too small, and
    cannot be split to cover the memory space range being added:

    > GCD:Initial GCD Memory Space Map
    > GCDMemType Range                             Capabilities     Attributes
    > ========== ================================= ================ ================
    > NonExist   0000000000000000-0000000FFFFFFFFF 0000000000000000 0000000000000000

    The size of this initial entry is determined from the CPU HOB
    (CoreInitializeGcdServices()).

    Set the SizeOfMemorySpace field in the CPU HOB to mPhysMemAddressWidth,
    which is the narrowest valid value to cover the entire guest RAM.

    Reported-by: Maoming <maoming.maoming@xxxxxxxxxx>
    Cc: Maoming <maoming.maoming@xxxxxxxxxx>
    Cc: Huangpeng (Peter) <peter.huangpeng@xxxxxxxxxx>
    Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
    Contributed-under: TianoCore Contribution Agreement 1.0
    Signed-off-by: Laszlo Ersek <lersek@xxxxxxxxxx>
    Tested-by: Wei Liu <wei.liu2@xxxxxxxxxx>
    Tested-by: Maoming <maoming.maoming@xxxxxxxxxx>
    Reviewed-by: Jordan Justen <jordan.l.justen@xxxxxxxxx>

    git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17720 6f19259b-4bc3-4df7-8a09-765794883524
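A sketch of the idea behind the fix, assuming HobLib's BuildCpuHob() and
BaseLib's LShiftU64(); the FirstNonAddress input and the 36-bit starting
width are assumptions for the example, not the actual PlatformPei code:

/*
 * Sketch only: pick the smallest power-of-two memory space that covers
 * the highest guest RAM address, and advertise its width in the CPU HOB
 * that the DXE core uses to size the initial GCD memory space map.
 */
#include <PiPei.h>
#include <Library/BaseLib.h>            // LShiftU64()
#include <Library/HobLib.h>             // BuildCpuHob()

STATIC
VOID
SketchBuildCpuHob (
  IN UINT64  FirstNonAddress            // assumed: one past the highest RAM address
  )
{
  UINT8  PhysMemAddressWidth;

  PhysMemAddressWidth = 36;             // assumed baseline width
  while (LShiftU64 (1, PhysMemAddressWidth) < FirstNonAddress) {
    PhysMemAddressWidth++;
  }

  //
  // SizeOfMemorySpace = PhysMemAddressWidth lets the DXE core create an
  // initial "nonexistent" GCD entry large enough to accept the high RAM
  // resource descriptor HOB; 16 is the customary I/O space width on x86.
  //
  BuildCpuHob (PhysMemAddressWidth, 16);
}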
commit bc89fe4879012b3d8e0b8f7a73bc0b2c122db5ad
Author: Laszlo Ersek <lersek@xxxxxxxxxx>
Date:   Fri Jun 26 16:09:39 2015 +0000

    OvmfPkg: PlatformPei: enable larger permanent PEI RAM

    We'll soon increase the maximum guest-physical RAM size supported by
    OVMF. For more RAM, the DXE IPL is going to build more page tables,
    and for that it's going to need a bigger chunk from the permanent PEI
    RAM.

    Otherwise CreateIdentityMappingPageTables() would fail with:

    > DXE IPL Entry
    > Loading PEIM at 0x000BFF61000 EntryPoint=0x000BFF61260 DxeCore.efi
    > Loading DXE CORE at 0x000BFF61000 EntryPoint=0x000BFF61260
    > AllocatePages failed: No 0x40201 Pages is available.
    > There is only left 0x3F1F pages memory resource to be allocated.
    > ASSERT .../MdeModulePkg/Core/DxeIplPeim/X64/VirtualMemory.c(123):
    > BigPageAddress != 0

    (The above example belongs to the artificially high, maximal address
    width of 52, clamped by the DXE core to 48. The address width of 48
    bits corresponds to 256 TB of RAM, and requires a bit more than 1GB
    for paging structures.)

    Cc: Maoming <maoming.maoming@xxxxxxxxxx>
    Cc: Huangpeng (Peter) <peter.huangpeng@xxxxxxxxxx>
    Cc: Wei Liu <wei.liu2@xxxxxxxxxx>
    Cc: Brian J. Johnson <bjohnson@xxxxxxx>
    Contributed-under: TianoCore Contribution Agreement 1.0
    Signed-off-by: Laszlo Ersek <lersek@xxxxxxxxxx>
    Reviewed-by: Brian J. Johnson <bjohnson@xxxxxxx>
    Reviewed-by: Jordan Justen <jordan.l.justen@xxxxxxxxx>

    git-svn-id: https://svn.code.sf.net/p/edk2/code/trunk/edk2@17719 6f19259b-4bc3-4df7-8a09-765794883524
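The 0x40201-page request in the quoted log matches the paging-structure
count for identity-mapping a 48-bit space with 2 MB pages: one PML4 page,
one PDPT page per 512 GB, and one page directory page per 1 GB. A small
stand-alone C check of that arithmetic (the 4 KB-per-table and 2 MB
mapping-granularity figures are the assumptions here):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Identity-map a 48-bit (256 TB) space with 2 MB pages. */
    uint64_t pml4_pages = 1;                  /* one PML4, 512 entries   */
    uint64_t pdpt_pages = 1ULL << (48 - 39);  /* one PDPT per 512 GB     */
    uint64_t pd_pages   = 1ULL << (48 - 30);  /* one PD per 1 GB         */
    uint64_t total      = pml4_pages + pdpt_pages + pd_pages;

    /* Prints 262657 pages == 0x40201, i.e. a bit more than 1 GB of
       4 KB paging structures, matching the failed allocation above. */
    printf("pages needed: %llu (0x%llx), bytes: %llu\n",
           (unsigned long long)total,
           (unsigned long long)total,
           (unsigned long long)(total * 4096));
    return 0;
}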
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel