
Re: [Xen-devel] Ethernet PCI passthrough problem


  • To: Frédéric Pierret <frederic.pierret@xxxxxxxxxxxx>
  • From: Jan Beulich <JBeulich@xxxxxxxx>
  • Date: Fri, 5 Jul 2019 14:24:51 +0000
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 05 Jul 2019 14:25:22 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [Xen-devel] Ethernet PCI passthrough problem

On 05.07.2019 15:46, Frédéric Pierret wrote:
> I'm experiencing a problem performing PCI passthrough of an Ethernet
> card with 4 ports (HP Ethernet 1Gb 4-port 331FLR Adapter) on an HP
> DL360 Gen8.
> 
> I have two servers like this one, where the first is under CentOS and
> the other one under Qubes. Under CentOS, the NICs are not attached to
> any other domain and a classical dmesg shows no errors (see the
> attached 'centos_kvm.png'). It has been working very well for a long
> time.

The name of the image suggests this is under KVM, not Xen. The device
being at bus 3 rather than bus 0 also suggests this isn't inside a
Xen HVM DomU.

> I'm trying to switch these servers to Qubes and I'm running into
> trouble. In Qubes, we attach all the NICs to a domain, usually called
> 'sys-net', running in HVM mode.
> 
> The NICs are attached to 'sys-net' with 'rdm_policy=relaxed' but are
> not brought up in the domain due to errors (see the attached
> 'HVM_dom0.png' and 'HVM_sys_net.png').
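
(As an aside: assuming Qubes ends up generating something along the
lines of a standard xl guest configuration (I haven't seen the actual
file, so the fragment below is purely illustrative, including the guess
that the four functions are 03:00.0 through 03:00.3), the relaxed
policy you mention would correspond to roughly this:

    # Hypothetical sketch of the PCI assignment part of the 'sys-net'
    # guest config; the file Qubes really writes isn't shown here.
    pci = [ '03:00.0,rdm_policy=relaxed',
            '03:00.1,rdm_policy=relaxed',
            '03:00.2,rdm_policy=relaxed',
            '03:00.3,rdm_policy=relaxed' ]

It would be worth confirming what actually reaches the toolstack.)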

The former of these ('HVM_dom0.png') shows a fundamental problem: two
of the RMRRs overlap the BIOS area inside the guest. I'm afraid I don't
see how to deal with this (short of shuffling the BIOS elsewhere, which
imo is not really an option). I wonder how this gets dealt with in the
CentOS case, where you say things work (I take it that you've verified
that the RMRRs on both systems are at exactly the same addresses).
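
(In case it helps with that comparison, and assuming the usual log
formats, which vary somewhat between Xen and kernel versions, the
reserved regions can normally be pulled out of the respective logs
with something like:

    # On the Qubes/Xen box; 'iommu=verbose' on the Xen command line
    # makes the RMRR ranges show up explicitly.
    xl dmesg | grep -i rmrr
    # On the CentOS/KVM box:
    dmesg | grep -iE 'dmar|rmrr'

and the base/end addresses then compared side by side.)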

I'm also puzzled by there being further messages about 03:00.2,
suggesting that domain construction (or device assignment) continues.
Yet the same messages don't appear for the other two devices (you did
say there are four of them, and other logs also support this).
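
(To narrow that down, a quick cross-check of what dom0 and the
toolstack see might help; the commands below assume the adapter really
is the set of functions at 03:00:

    lspci -s 03:00        # in dom0: all functions of the adapter
    xl pci-list sys-net   # what the toolstack believes is attached

comparing that against which functions the guest log complains about.)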

> I tried PV mode and got it working, but I was not happy with that for
> security reasons. I decided to update my BIOS to the most recent
> version, and now even PV does not work anymore (see the attached
> 'PV_dom0.png' and 'PV_sys_net.png').

That'll require figuring out what exactly the driver isn't liking. At
first glance I'm inclined to think the BIOS update broke things.

> All of this has been tried under Qubes 4.0.1 (xen-4.8) and the Qubes
> 4.1 currently under development (xen-4.12). The attached log images
> are from xen-4.12.

The fact that you say "log images" already points at a problem: actual
(and complete as well as sufficiently verbose) log files would be much
more helpful when diagnosing issues like this one.
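
(Concretely, and only as a suggestion since the exact knobs depend on
your setup: plain-text hypervisor and kernel logs, e.g.

    # Hypervisor log; booting Xen with 'loglvl=all guest_loglvl=all
    # iommu=verbose' makes it considerably more useful.
    xl dmesg > xen-dmesg.txt
    # dom0 kernel log, plus the same from inside sys-net:
    dmesg > dom0-dmesg.txt

attached as files rather than screenshots, would make it much easier
to see the full picture.)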

Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

