
Re: [help] Xen 4.14.5 on Devuan 4.0 Chimaera, regression from Xen 4.0.1


  • To: Jan Beulich <jbeulich@xxxxxxxx>, Denis <tachyon_gun@xxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Mon, 13 Mar 2023 11:43:34 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 13/03/2023 9:36 am, Jan Beulich wrote:
> On 10.03.2023 21:50, Denis wrote:
>> On 10.03.2023 09:51, Jan Beulich wrote:
>>> On 09.03.2023 21:37, Andrew Cooper wrote:
>>>> On 09/03/2023 7:34 pm, tachyon_gun@xxxxxx wrote:
>>>>> A short snippet of what I see when invoking "xl dmesg":
>>>>>  
>>>>> (XEN) No southbridge IO-APIC found in IVRS table
>>>>> (XEN) AMD-Vi: Error initialization
>>>>> (XEN) I/O virtualisation disabled 
>>>>>  
>>>>> What I would like to see (taken from Xen 4.0.1 running on Debian
>>>>> Squeeze, in use since 2011):
>>>>>  
>>>>> (XEN) IOAPIC[0]: apic_id 8, version 33, address 0xfec00000, GSI 0-23
>>>>> (XEN) Enabling APIC mode:  Flat.  Using 1 I/O APICs
>>>>> (XEN) Using scheduler: SMP Credit Scheduler (credit)
>>>>> (XEN) Detected 2611.936 MHz processor.
>>>>> (XEN) Initing memory sharing.
>>>>> (XEN) HVM: ASIDs enabled.
>>>>> (XEN) HVM: SVM enabled
>>>>> (XEN) HVM: Hardware Assisted Paging detected.
>>>>> (XEN) AMD-Vi: IOMMU 0 Enabled.
>>>>> (XEN) I/O virtualisation enabled
>>>>>  
>>>>> My question would be if this is "normal" behaviour due to older hardware
>>>>> being used with newer versions of Xen (compared to the old 4.0.1) or if
>>>>> this is a bug.
>>>>> If the latter, has this been addressed already in newer versions (4.14+)?
>>> No, the code there is still the same. The commit introducing the check
>>> (06bbcaf48d09 ["AMD IOMMU: fail if there is no southbridge IO-APIC"])
>>> specifically provided for a workaround: "iommu=no-intremap" on the Xen
>>> command line. Could you give this a try? (As per below this could be
>>> what we want to do "automatically" in such a situation, i.e. without
>>> the need for a command line option. But you then still would face a
>>> perceived regression of interrupt remapping being disabled on such a
>>> system.)
>>>
>>> The other possible workaround, "iommu=no-amd-iommu-perdev-intremap",
>>> is something I rather wouldn't want to recommend, but you may still
>>> want to give it a try.
>>  
>> Thanks for your reply.
>>
>> I added the lines you suggested and it seems that "AMD-Vi: IOMMU 0" and
>> "I/O virtualisation" are enabled again.
> Good - that'll have to do as a workaround for the time being.

Not really.  Booting this system with no-intremap is still a regression
vs Xen 4.0.1.
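
(For anyone wanting to reproduce Jan's suggested workaround in the
meantime, a sketch of the GRUB side of it, assuming a Debian/Devuan-style
GRUB2 setup where the Xen hook honours GRUB_CMDLINE_XEN_DEFAULT:)

```shell
# /etc/default/grub fragment -- hypervisor options go on the Xen command
# line, not the dom0 kernel line.  A sketch, not a definitive recipe.
GRUB_CMDLINE_XEN_DEFAULT="iommu=no-intremap"

# Then regenerate grub.cfg, reboot, and check the IOMMU state:
#   update-grub
#   reboot
#   xl dmesg | grep -iE 'AMD-Vi|virtualisation'
```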

Disabling interrupt remapping on PCIe devices because we can't figure
out interrupt handling around the PCI bridge is still bad behaviour.


What we need to figure out here is how interrupts from the PCI bridge
actually work.  The IVRS table does contain records covering the devices
on the Southbridge, including the PCI bridge and its entire subordinate
range.
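
(As a side note, the IVRS contents can be double-checked from dom0 with
the ACPICA tools, e.g.; package and file names here are illustrative:)

```shell
# Needs root and the acpica-tools package (name varies by distro).
acpidump -n IVRS -o ivrs.hex   # hex dump of just the IVRS table
acpixtract -a ivrs.hex         # convert the dump to binary -> ivrs.dat
iasl -d ivrs.dat               # disassemble into human-readable ivrs.dsl
# Then inspect ivrs.dsl for the device entries, in particular any
# IO-APIC "special device" entries (device entry type 0x48).
```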

MSI/MSI-X interrupts from the PCI devices will work fine (they'll have a
proper source id), so the only question is about line interrupts.  They
ought to appear with the bridge's source id, and ought to be fine too.


I see no evidence to suggest that the IVRS/MADT are incorrect or
incomplete.  Xen's belief that there must be a southbridge IO-APIC
special device seems to be the mistaken assumption here.

~Andrew



 

