
XSA-378 fixes breaking PVH Dom0 (was: [xen-4.15-testing test] 164495: regressions - FAIL)

  • To: xen-devel@xxxxxxxxxxxxxxxxxxxx
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Fri, 27 Aug 2021 15:29:52 +0200
  • Cc: osstest service owner <osstest-admin@xxxxxxxxxxxxxx>
  • Delivery-date: Fri, 27 Aug 2021 13:30:08 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 27.08.2021 08:52, osstest service owner wrote:
> flight 164495 xen-4.15-testing real [real]
> flight 164509 xen-4.15-testing real-retest [real]
> http://logs.test-lab.xenproject.org/osstest/logs/164495/
> http://logs.test-lab.xenproject.org/osstest/logs/164509/
> Regressions :-(
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-dom0pvh-xl-amd  8 xen-boot              fail REGR. vs. 163759
>  test-amd64-amd64-dom0pvh-xl-intel  8 xen-boot            fail REGR. vs. 163759

This is fallout from XSA-378. During Dom0 setup we first call
iommu_hwdom_init(), which maps reserved regions (p2m_access_rw).
Later map_mmio_regions() gets used to identity-map the low first
Mb. This, using set_mmio_p2m_entry(), establishes default-access
mappings (p2m_access_rwx).

Hence even if we relax the logic in set_typed_p2m_entry() to

    if ( p2m_is_special(ot) )
    {
        gfn_unlock(p2m, gfn, order);
        if ( mfn_eq(mfn, omfn) && gfn_p2mt == ot && access == a )
            return 0;
        return -EPERM;
    }

we're still in trouble (because the two access types don't match)
when there is any reserved region below 1Mb.

One approach would be to avoid blindly mapping the low first Mb,
and to instead honor mappings which are already there. Or the
opposite - avoid mapping anything from arch_iommu_hwdom_init()
which is below 1Mb. (Other mappings down the call tree from
pvh_setup_acpi() imo would then also need adjusting, to avoid
redundant mapping attempts of space below 1Mb. At least RSDP is
known to possibly live there on various systems.)

Another approach could be to stop passing ->default_access from
set_mmio_p2m_entry() to set_typed_p2m_entry(). (And I think the
same should go for set_foreign_p2m_entry()). At the very least
right now it makes no sense at all to make RWX mappings there,
except when mapping PCI device ROMs. But of course reducing
permissions always comes with a (however large or small) risk of
regressions.

While I think the latter aspect wants improving in any event,
right now I'm leaning towards the "opposite" variant of the
former. I'll draft a patch along these lines at least to see if
it helps, or if there is yet more fallout.



