
Re: [PATCH] xen: Allow platform PCI interrupt to be shared


  • To: David Woodhouse <dwmw2@xxxxxxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "linux-kernel@xxxxxxxxxxxxxxx" <linux-kernel@xxxxxxxxxxxxxxx>, Thomas Gleixner <tglx@xxxxxxxxxxxxx>, Paul Durrant <paul@xxxxxxx>
  • From: Andrew Cooper <Andrew.Cooper3@xxxxxxxxxx>
  • Date: Wed, 18 Jan 2023 14:39:34 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH] xen: Allow platform PCI interrupt to be shared

On 18/01/2023 2:26 pm, David Woodhouse wrote:
> On Wed, 2023-01-18 at 14:22 +0000, Andrew Cooper wrote:
>> On 18/01/2023 2:06 pm, David Woodhouse wrote:
>>> On Wed, 2023-01-18 at 13:53 +0000, Andrew Cooper wrote:
>>>> On 18/01/2023 12:22 pm, David Woodhouse wrote:
>>>>> Signed-off-by: David Woodhouse <dwmw@xxxxxxxxxxxx>
>>>>> ---
>>>>> What does xen_evtchn_do_upcall() exist for? Can we delete it? I don't
>>>>> see it being called anywhere.
>>>> Seems the caller was dropped by
>>>> cb09ea2924cbf1a42da59bd30a59cc1836240bcb, but the CONFIG_XEN_PVHVM
>>>> guard looks bogus because the precondition for setting it up was
>>>> being in a Xen HVM guest, and the guest takes evtchns by vector
>>>> either way.
>>>>
>>>> PV guests use the entrypoint called exc_xen_hypervisor_callback which
>>>> really ought to gain a PV in its name somewhere.  Also the comments look
>>>> distinctly suspect.
>>> Yeah. I couldn't *see* any asm or macro magic which would reference
>>> xen_evtchn_do_upcall, and removing it from my build (with CONFIG_XEN_PV
>>> enabled) also didn't break anything.
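
For context: since that commit the HVM vector callback arrives via the
IDTENTRY machinery, so nothing references xen_evtchn_do_upcall() by name
any more. A rough sketch of the two paths, with function bodies quoted
from memory rather than a live tree, so treat it as illustrative:

/* arch/x86/xen/enlighten_hvm.c -- the surviving HVM vector path */
DEFINE_IDTENTRY_SYSVEC(sysvec_xen_hvm_callback)
{
        struct pt_regs *old_regs = set_irq_regs(regs);

        inc_irq_stat(irq_hv_callback_count);

        xen_hvm_evtchn_do_upcall();     /* -> __xen_evtchn_do_upcall() */

        set_irq_regs(old_regs);
}

/* drivers/xen/events/events_base.c -- the regs-taking wrapper the old
 * asm stub used to call, now left without any caller */
__visible void xen_evtchn_do_upcall(struct pt_regs *regs)
{
        struct pt_regs *old_regs = set_irq_regs(regs);

        irq_enter();
        __xen_evtchn_do_upcall();
        irq_exit();

        set_irq_regs(old_regs);
}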
>>>
>>>> Some tidying in this area would be valuable.
>>> Indeed. I just need Paul or myself to throw in a basic XenStore
>>> implementation so we can provide a PV disk, and I should be able to do
>>> quickfire testing of PV guests too with 'qemu -kernel' and a PV shim.
>>>
>>> PVHVM would be an entertaining thing to support too; I suppose that's
>>> mostly a case of basing it on the microvm qemu platform, or perhaps
>>> an even *more* minimal x86-based platform?
>> There is no actual thing called PVHVM.  That diagram has caused far more
>> damage than good...
> Perhaps so. Even CONFIG_XEN_PVHVM in the kernel is a nonsense, because
> it's just automatically set based on (XEN && X86_LOCAL_APIC). And
> CONFIG_XEN depends on X86_LOCAL_APIC anyway.
>
> Which is why it never mattered that the vector callback handling was
> under #ifdef CONFIG_XEN_PVHVM rather than just CONFIG_XEN.
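
(That is, roughly this shape in arch/x86/xen/Kconfig -- quoting the
structure from memory:

config XEN_PVHVM
        def_bool y
        depends on XEN && X86_LOCAL_APIC

a def_bool derived from other options, so there is no user-visible
choice being expressed at all.)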
>
>> There's HVM (and by this I mean the hypervisor's interpretation, i.e.
>> VT-x or SVM), and a spectrum of things the guest kernel can do if it
>> desires.
>>
>> I'm pretty sure Linux knows all of them.
> But don't we want to refrain from providing the legacy PC platform devices?

That also exists and works fine (and is one slice of the spectrum).  KVM
even borrowed our PVH boot API because we'd already done the hard work
in Linux.
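
(The "PVH boot API" here is the direct-boot entry point a kernel
advertises through an ELF note; from memory, the declaration at the end
of arch/x86/platform/pvh/head.S looks roughly like:

ELFNOTE(Xen, XEN_ELFNOTE_PHYS32_ENTRY, _ASM_PTR pvh_start_xen)

and QEMU's -kernel loader consumes the same note, which is how KVM
guests came to reuse the PVH entry path.)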

~Andrew

 

