
Re: [PATCH] xen/arm: fix SBDF calculation for vPCI MMIO handlers


  • To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Thu, 28 Oct 2021 18:03:20 +0200
  • Cc: Julien Grall <julien@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "sstabellini@xxxxxxxxxx" <sstabellini@xxxxxxxxxx>, "iwj@xxxxxxxxxxxxxx" <iwj@xxxxxxxxxxxxxx>, Bertrand Marquis <bertrand.marquis@xxxxxxx>, Rahul Singh <rahul.singh@xxxxxxx>
  • Delivery-date: Thu, 28 Oct 2021 16:03:47 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Thu, Oct 28, 2021 at 02:23:34PM +0000, Oleksandr Andrushchenko wrote:
> 
> 
> On 28.10.21 16:36, Roger Pau Monné wrote:
> > On Thu, Oct 28, 2021 at 12:09:23PM +0000, Oleksandr Andrushchenko wrote:
> >> Hi, Julien!
> >>
> >> On 27.10.21 20:35, Julien Grall wrote:
> >>> Hi Oleksandr,
> >>>
> >>> On 27/10/2021 09:25, Oleksandr Andrushchenko wrote:
> >>>> From: Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
> >>>>
> >>>> In the vPCI MMIO trap handlers for the guest PCI host bridge it is not
> >>>> enough to simply call VPCI_ECAM_BDF(info->gpa) for the SBDF translation,
> >>>> as the base address may not be aligned in a way that makes the
> >>>> translation always work. If the address is not adjusted with respect to
> >>>> the base address, the SBDF may not be converted properly and Xen crashes:
> >>>>
> >>>> (XEN) vpci_mmio_read 0000:65:1a.0 reg 8bc gpa e65d08bc
> >>> I can't find a printk() that may output this message. Where does this
> >>> come from?
> >> That was a debug print. I shouldn't have used it in the patch
> >> description; it probably belongs after "---" to better explain
> >> what's happening.
> >>> Anyway, IIUC the guest physical address is 0xe65d08bc which, if I am not 
> >>> mistaken, doesn't belong to the range advertised for GUEST_VPCI_ECAM.
> >> This is from dom0 I am working on now.
> >>> IMHO, the stack trace should come from upstream Xen, or some
> >>> information is needed to explain how this was reproduced.
> >>>
> >>>> (XEN) Data Abort Trap. Syndrome=0x6
> >>>> (XEN) Walking Hypervisor VA 0x467a28bc on CPU0 via TTBR 
> >>>> 0x00000000481d5000
> >>> I can understand that if we don't subtract GUEST_VPCI_ECAM, we would (in
> >>> theory) not get the correct BDF. But... I don't understand how this would
> >>> result in a data abort in the hypervisor.
> >>>
> >>> In fact, I think the vPCI code should be resilient enough to not crash if 
> >>> we pass the wrong BDF.
> >> Well, there is no (?) easy way to validate an SBDF. And this could be a
> >> problem if we have a misbehaving guest which may force Xen to access
> >> memory beyond that of the PCI host bridge.
> > How could that be? The ECAM region exposed to the guest should be
> > the same as the physical one for dom0?
> Ok, I have a Designware PCI host which has 2 ECAM regions (I am starting to
> implement the driver for it, so I may be wrong here):
> - Root Complex ECAM area ("dbi"), it is something like 0x3000 bytes long
> - "Client" ECAM area ("config")
> So from Dom0 POV we have 2 ECAM regions and for the guest
> we always emulate a single big region:

You need support for multiple ECAM regions. That's how we do it on x86
PVH dom0. See register_vpci_mmcfg_handler and related machinery.

> /*
>   * 256 MB is reserved for VPCI configuration space based on calculation
>   * 256 buses x 32 devices x 8 functions x 4 KB = 256 MB
>   */
> #define GUEST_VPCI_ECAM_BASE    xen_mk_ullong(0x10000000)
> #define GUEST_VPCI_ECAM_SIZE    xen_mk_ullong(0x10000000)
> 
> So, the base address and size of the emulated ECAM space are not
> connected to those of the real host bridge.
> >
> > And for domUs you really need to fix vpci_{read,write} to not
> > passthrough accesses not explicitly handled.
> Do you mean that we need to validate SBDFs there?
> This can be tricky if we have a use-case where a PCI device being
> passed through is not put at 0000:00:0.0, but is requested to be at, for
> example, 0000:0d:0.0. So, we need to go over the list of virtual
> devices and see if the SBDF the guest is trying to access is a valid SBDF.
> Is this what you mean?

No, you need to prevent accesses to registers not explicitly handled
by vpci. Ie: do not forward unhandled accesses to
vpci_{read,write}_hw().

Regards, Roger.



 

